PGMODAYS 2019
PROGRAM FOR WEDNESDAY, DECEMBER 4TH

08:30-09:00 Coffee
09:00-10:30 Session 1A: Algebraic and geometric tools for optimisation and statistics

Invited session organized by Elias Tsigaridas

Location: Amphi 1
09:00
Solving sparse polynomial systems using Groebner bases

ABSTRACT. Gröbner bases are one of the most powerful tools in algorithmic non-linear algebra. Their computation is an intrinsically hard problem, with complexity at least single exponential in the number of variables. However, in most cases, the polynomial systems coming from applications have some kind of structure. For example, many problems in statistics and optimization involve sparse systems where the input polynomials have few non-zero terms.

We present an algorithm to compute Gröbner bases that exploits the sparsity of the polynomials through their Newton polytopes. We do not assume that all the polynomials have the same sparsity structure, that is, the same Newton polytope. This is the first algorithm with these properties. Under regularity assumptions, it performs no redundant computations and allows us to solve sparse polynomial systems over the torus.

The complexity of the algorithm depends on the Newton polytopes and is similar to the complexity of the algorithms based on sparse resultants. Hence, we close a 25-year (complexity) gap between solving strategies based on resultants and Gröbner bases. Additionally, for particular families of sparse systems, we use the multigraded Castelnuovo-Mumford regularity to further improve the complexity bounds. Joint work with Matias Bender and Jean-Charles Faugère.
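
For readers new to the topic, a minimal illustration of what a Gröbner basis computation produces, using sympy's classical (dense) implementation; this is not the sparse, Newton-polytope-aware algorithm of the talk, and the bivariate system is a made-up example.

```python
# Classical Groebner basis computation with sympy (not the talk's algorithm);
# the sparse system below is hypothetical.
from sympy import groebner, symbols

x, y = symbols("x y")
system = [x * y - 2, x**2 + y**2 - 5]        # two sparse bivariate polynomials
gb = groebner(system, x, y, order="lex")     # lex order allows back-substitution
print(gb)                                    # basis from which roots can be read off
```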

09:30
Solving linear Diophantine systems through Polyhedral Omega and applications in optimization

ABSTRACT. Polyhedral Omega (PO) is an algorithm for solving linear Diophantine systems, i.e., for computing a multivariate rational function representation of the set of all non-negative integer solutions to a system of linear equations and inequalities. PO combines methods from partition analysis with methods from polyhedral geometry. In particular, we combine MacMahon's iterative approach based on the Omega operator and explicit formulas for its evaluation with geometric tools such as Brion's decompositions and Barvinok's short rational function representations. This synthesis of ideas makes Polyhedral Omega the simplest algorithm for solving linear Diophantine systems available to date. Given a rational generating function representation of a feasible region defined by linear equations and inequalities, there are different ways of obtaining the optimum with respect to some linear functional. One advantage of PO is that it can obtain a decomposition of the feasible region into simplicial cones (called symbolic cones), without resorting to rational function representations. After presenting Polyhedral Omega, we will discuss possible applications in optimization.
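
For intuition only, a brute-force enumeration (on a small hypothetical system) of the kind of solution set that Polyhedral Omega represents symbolically as a rational generating function:

```python
# Enumerate non-negative integer solutions of a toy linear Diophantine system
# (x - y = 0 and 2x + y <= 6); PO encodes such sets symbolically instead.
import itertools

solutions = [
    (x, y)
    for x, y in itertools.product(range(7), repeat=2)
    if x - y == 0 and 2 * x + y <= 6
]
print(solutions)   # [(0, 0), (1, 1), (2, 2)]
```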

10:00
Combinatorial Matrix Theory in Multivariate Statistics: the Case of Structural Equation Models
PRESENTER: Marc Harkonen

ABSTRACT. Many operations on matrices can be viewed from a combinatorial point of view by considering graphs associated to the matrix. For example, the determinant and inverse of a matrix can be computed from the linear subgraphs and 1-connections of the Coates digraph associated to the matrix. This combinatorial approach also naturally takes advantage of the sparsity structure of the matrix, which makes it ideal for applications in linear structural equation models. Another advantage of these combinatorial methods is that they are often agnostic as to whether the mixed graph contains cycles. As an example, we obtain a symbolic representation of the entries of the covariance matrix as a finite sum. In general, this sum is similar to the well-known trek rule, but where each half of the trek is a 1-connection instead of a path. This method of computing the covariance matrix can be easily implemented in computer algebra systems, and scales extremely well when the mixed graph has few cycles.
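
For contrast with the combinatorial expansion, a sketch of the classical algebraic closed form for the covariance matrix of a linear structural equation model; the three-node graph below is a hypothetical example:

```python
# Linear SEM X = Lambda^T X + eps with cov(eps) = Omega gives
# Sigma = (I - Lambda)^{-T} Omega (I - Lambda)^{-1}; the talk's method expands
# these entries combinatorially (via 1-connections) instead of inverting.
import numpy as np

d = 3
Lam = np.zeros((d, d))
Lam[0, 1] = 0.8                  # edge 0 -> 1 with coefficient 0.8
Lam[1, 2] = 0.5                  # edge 1 -> 2 with coefficient 0.5
Omega = np.eye(d)                # independent error terms

I = np.eye(d)
Sigma = np.linalg.inv(I - Lam).T @ Omega @ np.linalg.inv(I - Lam)
print(Sigma)
```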

09:00-10:30 Session 1B: Recent Advances in Optimization for Machine Learning, part I

Invited session organized by Émilie Chouzenoux and Jean-Christophe Pesquet

Location: Amphi 2
09:00
Model Consistency for Learning with Low-complexity Priors

ABSTRACT. We consider supervised learning problems where the prior on the underlying distribution is an assumption of low complexity (such as low rank or structured sparsity). An important question in that setting is that of model consistency, which asks whether or not an estimator shares the same low complexity.

It is known in Inverse Problems that model consistency holds under appropriate non-degeneracy conditions. However, such conditions typically fail for highly correlated designs (typical in learning), and it is observed that estimators obtained by regularization methods, possibly approximated with stochastic optimization methods, tend to select models having a larger complexity.

In this talk, we provide the theoretical underpinning of this behavior for a large class of low-complexity models, and we show that estimators obey a sandwich principle. More precisely, their complexity is always found in between the complexity of the expected distribution, and the complexity of a certain dual certificate. We will also investigate the role of stochastic algorithms in this setting, and underline the need for using variance-reduced methods for this result to hold.

09:30
Matrix Completion on Graphs: Modeling Interactions

ABSTRACT. Various biological and human-sensing problems can be recast as link prediction between two graphs. For example, consider the problem of drug-target repositioning [1, 2]. Some of the drugs are known to interact with certain target proteins; the goal is to see if the same drugs will interact with other targets as well. Without any knowledge about drugs and targets, the best approach is to frame it as matrix completion [3]. In practice, meta-information about both drugs and targets is known, from which fully connected graphs can be constructed among them. In such a case, the problem needs to be reformulated as predicting links between the drug and target graphs [4]. In this talk, we will touch upon various problems arising in social networks [5] and bioinformatics [6, 7] that can be modeled thus. We will talk about various signal processing and machine learning models, including deep learning, that are currently being developed to tackle the problem.

10:00
Improving the convergence of Stochastically perturbed Nesterov's accelerated Forward-Backward

ABSTRACT. Motivated by problems arising in Statistical Learning, we consider the convergence of the perturbed Nesterov-accelerated forward-backward algorithm when applied to compute the minimum of a convex composite function. We address the case when the perturbation only affects the gradient step, with a special emphasis on situations where the gradient is an expectation: different Monte Carlo strategies will be discussed (including biased Monte Carlo), possibly combined with averaging techniques and over-relaxations. We will first show the need to improve on previous results in the literature in the case of (biased) stochastic approximations; then, we will discuss the impact of these algorithmic strategies on the rate of convergence, both theoretically, in terms of the long-time behavior of the functional, and numerically.

This talk is based on joint works with J.F. Aujol (IMB), P. Gach (IMT) and E. Moulines (Ecole Polytechnique).
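
As a rough sketch of the setting (not the speakers' exact scheme), Nesterov-accelerated forward-backward (FISTA) on a synthetic Lasso problem, with the gradient replaced by a noisy estimate:

```python
# Perturbed FISTA on the Lasso min_x 0.5*||Ax - b||^2 + lam*||x||_1;
# the Gaussian noise added to the gradient stands in for Monte Carlo error.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
b = A @ (rng.random(20) * (rng.random(20) < 0.3)) + 0.01 * rng.normal(size=100)
lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x, y, t = np.zeros(20), np.zeros(20), 1.0
for _ in range(200):
    grad = A.T @ (A @ y - b) + rng.normal(scale=0.1, size=20)  # noisy gradient step
    x_new = soft_threshold(y - step * grad, step * lam)        # proximal (backward) step
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)                # Nesterov extrapolation
    x, t = x_new, t_new
```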

09:00-10:30 Session 1C: Telecommunications I
Location: A1.140
09:00
Routing and Slot Allocation in 5G Hard Slicing
PRESENTER: Nicolas Huin

ABSTRACT. 5G promises to improve the scalability, end-to-end latency, and capacity of networks. It also aims to ease the management of networks by allowing network operators to slice their network, subdividing resources among tenants, which may have different Quality of Service constraints. While current solutions (soft slicing) provide logical isolation, tenants can still impact each other when overloading their slices.

Flex Ethernet (FlexE) is a standard technology that provides strict isolation between slices (hard slicing) by using time multiplexing and a schedule with fixed-size slots. Each slice is assigned a given number of slots and cannot overflow into other slices' slots. But the smallest possible granularity is 1 Gb: using FlexE can lead to wasted allocated bandwidth.

For a given slice, i.e., a set of un-splittable requests with QoS constraints, finding the routing of each request and the slot allocation of each link defines the Routing and Slot Allocation problem. We propose a matheuristic approach based on Column Generation to solve it. It provides solutions, on IP-RAN scenarios, with an optimality gap smaller than 7%, while reducing the reservation cost by 4% compared to a greedy algorithm. We further improve our lower bounds by proposing new valid inequalities.

09:30
Routing in deterministic networks

ABSTRACT. Historically vertically integrated industrial applications are migrating to versatile IP infrastructures. In order to support tight latency and reliability requirements, Internet standards are evolving to ensure (i) deterministic latency and bounded jitter, and (ii) zero packet losses in the network. In the context of deterministic networks, we present two centralized algorithms: one that maximizes traffic acceptance during the planning phase and one that achieves online admission of flows. With the enhancement of the model, we improve the upper bound and the quality of column-based solutions for the planning problem. We show in realistic cases that we reach an optimality gap of a few percent in a few seconds. Finally, we propose an ultra-fast and adaptive greedy algorithm that solves the online admission problem.

10:00
Extended formulations for the Virtual Network Functions Placement and Routing Problem
PRESENTER: Ahlam Mouaci

ABSTRACT. In this talk we present a problem faced by network service providers in which a set of Virtual Network Functions (VNFs) has to be installed in a telecommunication network at minimum cost. For each given origin-destination pair of nodes (commodities), a latency-constrained routing path has to be found that visits the required VNFs in a pre-defined order. We have proven that the problem is NP-hard in the strong sense, even for a single commodity and without latency and precedence constraints. In this presentation we provide two extended formulations to model it, along with two families of valid inequalities. To tackle the problem from a computational perspective, we provide theoretical results that allow us to relax one family of variables in our Branch-and-Price algorithm. We also propose an alternative path-based MILP formulation, which is used to provide heuristic solutions. All these ingredients are combined in a Branch-and-Price framework and computationally tested on a wide range of realistic instances. Our results are also compared with a compact Integer Linear Programming formulation. Computational results indicate that both of our extended formulations are more efficient than the compact formulation in terms of bounds. The results also indicate that our MILP heuristic provides high-quality solutions.

09:00-10:30 Session 1D: Applications: Transportation
Location: A1.134
09:00
Improving MCTS approaches for Vehicle Routing Problems
PRESENTER: Thomas Triboulet

ABSTRACT. Progress in Tree Search techniques enables testing these methods on Vehicle Routing Problems. Nested Rollout Policy Adaptation (NRPA) is a recent refinement of MCTS and was tested on TSP [2] and VRP [3] problems. The objective of this work was to improve the NRPA algorithm and adapt it to actual business problems.

09:30
A DSS based on hybrid metaheuristic for waste collection routing problem with time windows
PRESENTER: Islem Kaabachi

ABSTRACT. In this paper, a specific problem in the context of solid waste collection, namely the municipal solid waste collection problem, is discussed. In the literature, the waste collection problem is mostly modeled as a VRPTW, but to model the issue in the real world, the collection is mainly performed using trucks with different capacities, which leads to the context of using a heterogeneous fleet to perform waste collection. The main objective of this problem is to minimize the total traveled distance while using a minimum number of trucks in order to decrease the transportation cost. A mathematical model is proposed in which the total collected waste for each collection area does not exceed the truck capacity and the truck must return to the depot after unloading waste at the landfill. Moreover, the waste collection must be within the service time window. The performance of the proposed algorithm is highlighted through the implementation of a DSS. The computational results show that the proposed algorithm reached optimal solutions. The results are also compared to existing approaches for evaluating the performance of the proposed algorithm. Moreover, it is tested using a real case study of the solid waste collection process applied by a Tunisian municipality.

10:00
Delay propagation on a suburban railway network
PRESENTER: Guillaume Dalle

ABSTRACT. A new method is introduced to predict train delays on a suburban railway network. Delay propagation is accounted for using a Dynamic Bayesian Network related to the underlying infrastructure graph. We analyze a simplified model from a statistical point of view, and obtain a lower bound on the minimax estimation risk. We propose a generic implementation using Stochastic Variational Inference and the Pyro library to separate estimation from modeling. We present numerical tests on both simulated data and real event logs from the Paris RER network.
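
A minimal sketch of the estimation pipeline described above, with a toy, hypothetical delay model fitted by Stochastic Variational Inference in Pyro (the talk's actual Dynamic Bayesian Network is richer):

```python
# Toy SVI setup in Pyro: exponential delays with a Gamma prior on the rate.
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from torch.distributions import constraints

def model(delays):
    rate = pyro.sample("rate", dist.Gamma(1.0, 1.0))          # prior on delay rate
    with pyro.plate("trains", len(delays)):
        pyro.sample("obs", dist.Exponential(rate), obs=delays)

def guide(delays):
    a = pyro.param("a", torch.tensor(1.0), constraint=constraints.positive)
    b = pyro.param("b", torch.tensor(1.0), constraint=constraints.positive)
    pyro.sample("rate", dist.Gamma(a, b))                     # variational posterior

delays = torch.tensor([2.0, 5.0, 1.0, 3.0])                   # toy delays (minutes)
svi = SVI(model, guide, Adam({"lr": 0.02}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step(delays)
```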

09:00-10:30 Session 1E: Stochastic Optimal Control
Location: A1.133
09:00
Linear–Quadratic optimal control for a class of stochastic Volterra equations: solvability and approximation
PRESENTER: Enzo Miller

ABSTRACT. We provide an exhaustive treatment of Linear-Quadratic optimal control problems for a class of stochastic Volterra equations of convolution type, whose kernels are Laplace transforms of certain signed matrix measures which are not necessarily finite. These equations are in general neither Markovian nor semimartingales, and include equations with delays or the fractional Brownian motion with Hurst index smaller than $1/2$ as a special case. We establish the correspondence of the initial problem with a possibly infinite dimensional Markovian one in a Banach space, which allows us to identify the Markovian state variables. Using a refined martingale verification argument combined with a completion of squares technique, we prove that the value function is of linear-quadratic form in these state variables with a linear optimal feedback control, depending on Banach space valued Riccati equations, for which we provide generic existence and uniqueness results. Furthermore, we show that the value function of the stochastic Volterra optimization problem can be approximated by that of conventional finite dimensional Markovian Linear-Quadratic problems, which is of crucial importance for numerical implementation.

09:30
Optimal Probabilistic Control of the Vibrating String under Random Initial Conditions

ABSTRACT. Probabilistic constraints have recently attracted growing interest in PDE constrained optimization with uncertain parameters. For instance, in gas transport one may be interested to drive some current initial state of the system to stationarity within a given period of time. Most typically, the initial state is not precisely known but may be estimated by means of measurements. This uncertainty leads, given any control, to an uncertain terminal state of the system too. Hence, one can only hope for some `epsilon-stationary' terminal state achieved with a given minimum probability. Under certain simplifying assumptions the gas dynamics can be reduced to a couple of one-dimensional wave equations. As a preparatory step to solve the original problem, we therefore consider the optimal probabilistic Neumann boundary control of a vibrating string under uncertain initial conditions. We present the model, its structural properties and its algorithmic solution along with numerical results.

10:00
Representation formulas for limit values of long run stochastic optimal controls
PRESENTER: Marc Quincampoix

ABSTRACT. A classical problem in stochastic ergodic control consists of studying the limit behavior of the optimal value of a discounted integral in infinite horizon (the so-called Abel mean of an integral cost) as the discount factor tends to zero, or of the value defined with a Cesaro mean of an integral cost as the horizon tends to infinity. We investigate the possible limits, in the norm of the uniform convergence topology, of values defined through Abel means or Cesaro means. We give two types of new representation formulas for the accumulation points of the values as the averaging parameter converges. We show that there is only one possible accumulation point, which is the same for Abel means and Cesaro means. The first type of representation formula is based on probability measures on the product of the state space and the control state space, which are limits of occupational measures. The second type of representation formula is based on measures which are the projection of invariant measures on the space of relaxed controls. We also give a result comparing the two sets of measures involved in the two classes of representation formulas.
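
For reference, in one standard notation (an assumption; the talk's precise setting may differ), the two values whose limits are compared are:

```latex
% Abel mean (discount factor lambda) and Cesaro mean (horizon T) of a
% running cost \ell along a controlled trajectory X^{x,u}:
\[
V_\lambda(x) = \inf_{u}\, \mathbb{E}\!\left[\lambda \int_0^{+\infty}
  e^{-\lambda t}\, \ell\big(X^{x,u}_t, u_t\big)\, dt\right],
\qquad
V_T(x) = \inf_{u}\, \mathbb{E}\!\left[\frac{1}{T} \int_0^{T}
  \ell\big(X^{x,u}_t, u_t\big)\, dt\right],
\]
% and the talk studies the limits of V_lambda as lambda -> 0 and of V_T as
% T -> infinity in the uniform convergence topology.
```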

09:00-10:30 Session 1F: Bilevel Programming and Applications, part I

Invited session organized by Luce Brotcorne and Miguel Anjos

Location: A1.128
09:00
Multiple Decision-Makers in a Collective Self-consumption context: Bilevel optimization
PRESENTER: Saad Balbiyad

ABSTRACT. In a collective self-consumption context, we focus on a special case of agreements where multiple Decision-Makers with conflicting objectives are allowed. This generates a hierarchical decision problem which is modeled as a bilevel optimization problem. The aggregator is the leader and the agreement's members are the followers.

The obtained bilevel problem is solved via a single-level reduction, expressing the lower level's optimality with KKT conditions. We explore two kinds of formulations. The first one gives a Mathematical Program with Complementarity Constraints (MPCC), and the second one gives a Non-Convex Quadratically Constrained Program (QCP). Solving linear approximations of the obtained problems, we get an upper bound (UB) and a lower bound (LB) for the original bilevel program.
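
Schematically, the single-level reduction takes the following standard form (generic notation F, f, g, not the talk's specific model):

```latex
% Bilevel problem and its KKT-based single-level (MPCC) reformulation.
\[
\min_{x,\,y} F(x, y)
\quad \text{s.t.} \quad
y \in \arg\min_{y'} \{\, f(x, y') \;:\; g(x, y') \le 0 \,\}
\]
% becomes, replacing the lower level by its KKT conditions,
\[
\min_{x,\,y,\,\lambda} F(x, y)
\quad \text{s.t.} \quad
\nabla_y f(x, y) + \nabla_y g(x, y)^{\top} \lambda = 0,\;\;
g(x, y) \le 0,\;\; \lambda \ge 0,\;\; \lambda_i\, g_i(x, y) = 0 \;\; \forall i .
\]
```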

Results provide a practical tool to visualize clients' possible behavior in a collective self-consumption agreement.

09:30
Minimizing Energy and Link capacity Utilization in ISP Backbone Networks: Bi-level approach
PRESENTER: Ikram Bouras

ABSTRACT. In this work, we are interested in the problem of energy-aware Traffic Engineering while using multi-path routing (ETE-MPR) to minimize link capacity utilization in ISP backbone networks. To this end, we propose a bi-level optimization model where the upper level represents the energy management function and the lower level refers to the deployed multi-path routing protocol. Then, we reformulate it as a single-level MILP by replacing the lower-level problem with different sets of optimality conditions. We further use these formulations to solve the problem with classical branch-and-bound, cutting plane and branch-and-cut algorithms. The computational experiments are performed on real instances to compare the proposed algorithms and to evaluate the efficiency of our model against the existing single-path and multi-objective approaches.

10:00
Random Allocations, Fairness and Implementability in Stackelberg Security Games
PRESENTER: Victor Bucarey

ABSTRACT. In this talk we discuss the impact of fairness constraints in Stackelberg Security Games. Fairness constraints can be used to avoid discrimination when implementing police patrolling. We present two ways of modelling fairness constraints, one with a detailed description of the population and the other with labels. We discuss the implementability of these constraints. In the case that the constraints are not implementable, we present models to retrieve pure strategies that are the closest on average to the set of fairness constraints. This last problem has a nice interpretation as a Combinatorial Optimization Problem.

09:00-10:30 Session 1G: Optimization of energy
Location: A1.122
09:00
McKean Stochastic Optimal Control of an Energy Storage System to reduce demand variability

ABSTRACT. We consider a smart-grid, characterized by its own exogenous production and consumption, connected to the network and to a controllable energy storage system. We control this storage system and aim at limiting simultaneously the power demand peaks on the network, the storage system aging and the fluctuations of the power supplied by the network. While the first two components of the cost are related to the trajectory of the system, and therefore to the standard stochastic optimal control paradigm, the last one is more related to the probability distribution of the system, and therefore to the McKean stochastic optimal control paradigm.

We propose here to model the problem of energy storage control as a McKean stochastic optimal control problem. We derive necessary and sufficient optimality conditions, using the stochastic Pontryagin principle. Some existence results for a solution to this class of problems are obtained in the general case. In the Linear-Quadratic case, explicit closed-loop feedback formulas for the optimal control are available. In the non-quadratic case, an approximation of the optimal control can be computed efficiently in some cases. The performance of our approach is demonstrated through numerical examples in both cases.

09:30
Arbitrage with Power Factor Correction using Energy Storage
PRESENTER: Md Umar Hashmi

ABSTRACT. The importance of reactive power compensation for power factor (PF) correction will significantly increase with the large-scale integration of distributed generation interfaced via inverters producing only active power. In this work, we focus on co-optimizing energy storage for performing energy arbitrage as well as local power factor correction. The joint optimization problem is non-convex, but can be solved efficiently using a McCormick relaxation along with penalty-based schemes. Using numerical simulations on real data and realistic storage profiles, we show that energy storage can correct PF locally without reducing arbitrage profit. It is observed that active and reactive power control is largely decoupled in nature for performing arbitrage and PF correction (PFC). Furthermore, we consider a stochastic online formulation of the problem with uncertain load, renewable and pricing profiles. We develop a model predictive control based storage policy using an auto-regressive forecast for the uncertainty. Using numerical simulations, we observe that PFC is primarily governed by the size of the converter and, therefore, look-ahead in time in the online setting does not affect PFC noticeably. However, arbitrage profits are more sensitive to uncertainty for batteries with faster ramp rates compared to slow-ramping batteries.
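
For background, a sketch of the McCormick envelope used to relax a bilinear term w = x*y over a box; the numbers are illustrative, not from the paper:

```python
# McCormick envelope of w = x*y on [xl, xu] x [yl, yu]: four linear cuts whose
# max/min bound the product from below/above at every point of the box.
def mccormick_bounds(x, y, xl, xu, yl, yu):
    lo = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return lo, hi

lo, hi = mccormick_bounds(0.3, 0.7, 0.0, 1.0, 0.5, 1.0)
assert lo <= 0.3 * 0.7 <= hi    # the relaxation contains the true product
```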

10:00
Chance Constrained AC Optimal Power Flow with Sparse Polynomial Chaos Expansion
PRESENTER: David Métivier

ABSTRACT. Traditional power system operational planning and management is being challenged by the increased penetration of renewable energy and distributed energy resources. The variability of power consumption and generation inherent to these additions calls for new control and optimization tools capable of accurately handling the impact of uncertainty on a faithful, nonlinear description of the power grid. In this work, we develop a Sparse Polynomial Chaos Expansion algorithm for integrating uncertainty into the AC Optimal Power Flow (ACOPF) problem in a scalable way. We consider the chance-constrained ACOPF, which ensures that safety limits are enforced with a prescribed probability.

09:00-10:30 Session 1H: Applications: Stochastic Optimization and Energy
Location: A1.139
09:00
Design of a charging infrastructure for electric vehicles under a stochastic driving range
PRESENTER: Céline Gicquel

ABSTRACT. We seek to optimize the deployment of an infrastructure based on fast charging stations in order to enable electric vehicles to travel long-distance trips. In the resulting facility location problem, the demand is not located at nodes of the underlying network but is represented by a set of trips to be refueled. A trip is refueled if, for each pair of consecutive stations visited when traveling along the corresponding path, the energy available in the battery when leaving the first station is high enough to provide the power needed by the vehicle to reach the second station. We investigate a variant of this problem in which the uncertainties on the battery energy status and the power consumption are explicitly considered. This leads to the formulation of a chance-constrained program seeking to maximize the number of drivers for which the risk of running out of fuel when carrying out their trip is below a certain threshold. We propose a solution approach based on a partial sample approximation of the stochastic parameters and carry out numerical experiments on medium-size instances. Our results show that the proposed approach outperforms a previously published approach based on Bonferroni’s inequality in terms of solution quality.

09:30
Stochastic Model for the Uncertainties in the Long-Term Prediction of Run-of-River Hydropower Generation
PRESENTER: Valentina Sessa

ABSTRACT. The European project Clim2power also deals with the daily prediction of the national run-of-river hydropower generation based on the precipitations and the air temperature of some European climate regions. This is not an easy task, as it is necessary to capture the complex relationship between the availability of water and the generation of electricity. Indeed, this kind of hydropower generation is limited by the flow of the river in which the power plants are located. In this work, the prediction of the hydropower capacity factor is obtained by using machine learning (ML) techniques. We compare the performance of several well-established regression algorithms in terms of correlation coefficient, mean absolute and mean square percentage errors. Our preliminary experiments showed that the algorithms based on ensembles of trees and the artificial neural networks perform quite well on most of the evaluation criteria. An additional output provided by the ML algorithms is the corresponding modeling error. We propose to build a stochastic model for this uncertainty, whose dynamics aim to reproduce the statistical characteristics of the prediction deviation with respect to its long-term mean, so as to be able to enrich the ML prediction with probabilistic anomaly indicators aggregated at the scale of countries.

10:00
Strategic behaviour of risk-averse agents under stochastic market clearing
PRESENTER: Vincent Leclere

ABSTRACT. We consider a risk-averse energy producer and consumer participating in a complete market. Instead of assuming a price-taking approach toward the risk trading assets, we assume strategic behaviour. In other terms, we consider the market power of agents misrepresenting their risk.

09:00-10:40 Session 1I: IRSDI review session, part I
Location: A1.116
09:00
Generating Specialized Biased Experts for Electricity and Pollution Forecasting using Online Mixture

ABSTRACT. The practical interest of using ensemble methods has been highlighted in several works. Sequential prediction provides a natural framework for adapting ensemble methods to time series data. In the context of sequential prediction, experts make predictions at each time instant, and the forecaster must determine, step by step, the future values of an observed time series. The aim of this project is to add insight into how these online mixture methods succeed in enhancing the prediction performance over individual experts. For this, we mainly adopt an applied point of view, making extensive use of numerical experiments.
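
As one concrete example of such an online mixture (a standard method, not necessarily the project's exact aggregation rule), the exponentially weighted average forecaster:

```python
# Exponentially weighted average forecaster with square loss on toy data:
# experts are reweighted by exp(-eta * loss) after each round.
import numpy as np

def ewa(expert_preds, targets, eta=0.5):
    T, K = expert_preds.shape
    w = np.ones(K)
    mixture = np.empty(T)
    for t in range(T):
        p = w / w.sum()                      # current mixture weights
        mixture[t] = p @ expert_preds[t]     # aggregated forecast
        w *= np.exp(-eta * (expert_preds[t] - targets[t]) ** 2)
    return mixture

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 10, 200))
experts = np.stack([y + rng.normal(0, s, 200) for s in (0.1, 0.5, 1.0)], axis=1)
print(np.mean((ewa(experts, y) - y) ** 2))   # close to the best expert's error
```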

09:25
Privacy Preserving Synthetic Smart Meters Data

ABSTRACT. Power consumption readings taken from the users of an electricity company are very useful data that can have a considerable impact in optimizing the performance of power grids. In addition, live power consumption readings can help to detect anomalies and prevent faults in power grids. However, this data contains sensitive information about the households from which readings were taken. The goal of our work is to generate artificial power consumption curves that are faithful to natural ones while ensuring privacy for all households used for training the corresponding generative models. We introduce a novel network architecture based on convolutional neural networks. To assess the quality of the produced curves, we make use of different indicators (e.g., mean, maximum-mean ratio, skewness). The performances are investigated on real data from household energy consumption.

09:50
Co-Clustering for multivariate functional data for the Analysis of Electric Power Consumption
PRESENTER: Julien Jacques

ABSTRACT. After the installation of 300,000 smart meters "Linky" between 2009 and 2011 in the areas of Lyon and Tours, the authorities have decided to generalize these meters throughout the territory. By 2021, 35 million meters should be replaced in French households by Linky meters, allowing electricity operators to record electricity consumption. For an operator like EDF, with 27 million residential dwellings, these new smart meters represent a great opportunity to gather customer consumption data and therefore to improve client knowledge. Indeed, so far, customer data were recorded only every six months, while with the smart meter, the data can be taken up to every second. In practice, EDF plans to access the data every half hour, which means 17,472 measures per year for each of the 27 million clients. The Linky meters do not only measure electric power consumption but also other parameters such as indoor and outdoor temperatures of households. Consequently, we no longer have one time series per customer but several. In the present work, an extension of the funLBM co-clustering model to the case of multivariate curves is proposed to build "summaries" of these data.

10:15
Signature-Based Disaggregation of Electricity Demand
PRESENTER: Themis Palpanas

ABSTRACT. Several applications need access to detailed knowledge of the electricity consumption patterns of individual consumers. Therefore, there is an increasingly pressing need for analyzing detailed smart meter data, which collect sequences (or time series) of electricity consumption data, aggregated at the level of households. In our work, we describe scalable and robust techniques for disaggregating the smart meter data, in order to identify the signatures, namely, the characteristic usage patterns, of different appliances.

10:30-11:00 Coffee
11:00-13:00 Session 2A: Polynomial optimization and applications

Invited session organized by Simone Naldi, Mohab Safey El Din and Victor Magron

Location: Amphi 1
11:00
Partial Recovery in the Graph Alignment Problem

ABSTRACT. We consider two adjacency matrices A and B, obtained by generating two correlated (n,q,s) Erdős-Rényi graphs with adjacency matrices A1 and A2, and taking $A = A_1$ and $B = \Pi^{*\top} A_2 \Pi^*$, where $\Pi^*$ is a permutation matrix. We present conditions on n, q, and s under which one can recover, both theoretically and computationally, a permutation matrix whose overlap with $\Pi^*$ is larger than a fixed fraction of the nodes. We also present conditions on n, q and s under which recovery is impossible.
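
A sketch of one common construction of such a correlated pair (a parent G(n, q) whose edges are kept independently with probability s in each child); the talk's exact conventions may differ:

```python
# Sample correlated Erdos-Renyi graphs and hide one behind a permutation.
import numpy as np

rng = np.random.default_rng(0)

def correlated_pair(n, q, s):
    iu = np.triu_indices(n, k=1)
    parent = rng.random(iu[0].size) < q                  # parent graph G(n, q)
    def child():
        A = np.zeros((n, n), dtype=int)
        A[iu] = parent & (rng.random(iu[0].size) < s)    # keep each edge w.p. s
        return A + A.T
    return child(), child()

n = 200
A1, A2 = correlated_pair(n, 0.05, 0.8)
Pi = np.eye(n, dtype=int)[rng.permutation(n)]            # hidden permutation Pi*
A, B = A1, Pi.T @ A2 @ Pi                                # the observed pair
```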

11:30
A moment approach for entropy solutions to scalar conservation laws
PRESENTER: Swann Marx

ABSTRACT. In this talk, we will propose a new numerical scheme, based on the so-called Lasserre hierarchy, which solves scalar conservation laws. Our approach is based on a very weak notion of solution introduced by DiPerna in 1985, called entropy measure-valued solution. Among other nice properties, this formulation is linear in a Borel measure, which is the unknown of the equation, and moreover it is equivalent to the well-known entropy solution formulation. Our aim is to explain how the Lasserre hierarchy allows one to solve such a linear equation without relying on a mesh, but rather by truncating the moments of the measure under consideration up to a certain degree. This talk is based on some recent results jointly obtained with Didier Henrion, Jean Bernard Lasserre and Tillmann Weisser.

12:00
Globally Optimal Solution to Inverse Kinematics of 7DOF Serial Manipulator
PRESENTER: Pavel Trutman

ABSTRACT. The Inverse Kinematics (IK) problem is to find robot control parameters that bring the robot into the desired position under kinematics and collision constraints. We present a global solution to the optimal IK problem for a general serial 7DOF manipulator with revolute joints and a quadratic polynomial objective function. We show that the kinematic constraints due to rotations can all be generated by second degree polynomials. This is important since it significantly simplifies the further step, where we find the optimal solution by Lasserre relaxations of a non-convex polynomial system. We demonstrate that the second relaxation is sufficient to solve the 7DOF IK problem. Our approach is certifiably globally optimal. We demonstrate the method on the 7DOF KUKA LBR IIWA manipulator and show that we are able to compute the optimal IK or certify infeasibility in 99.9% of the tested poses.
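
One standard way to see that rotation constraints can be generated by degree-2 polynomials (not necessarily the authors' exact parametrization): every entry of a rotation matrix is quadratic in a unit quaternion.

```python
# Rotation matrix from a unit quaternion q = (w, x, y, z), w^2+x^2+y^2+z^2 = 1:
# all nine entries are degree-2 polynomials in (w, x, y, z).
import numpy as np

def quat_to_rot(w, x, y, z):
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = quat_to_rot(0.5, 0.5, 0.5, 0.5)        # a unit quaternion
assert np.allclose(R @ R.T, np.eye(3))     # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # and a proper rotation
```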

12:30
The Impact of Quadratization in Convexification-Based Resolution of Polynomial Binary Optimization

ABSTRACT. Polynomial Binary Optimization (PBO) without additional constraints is a very general class of problems finding applications in many different areas. Recently, there have been significant methodological advances in solving (PBO) problems of degree larger than or equal to three. We consider resolution approaches for (PBO) based on the idea of first reformulating the higher-degree problem into an equivalent quadratic one (quadratization) and then solving the quadratization by convexification. Given a quadratization of (PBO), a method called Polynomial Quadratic Convex Reformulation (PQCR) has been recently introduced in the literature. (PQCR) solves the convexification step by building a family of parameterized equivalent convex reformulations of the quadratization and choosing among these the formulation that provides the tightest continuous relaxation bound, which is computed using semidefinite programming techniques. This strongest convex reformulation is then solved using a branch-and-bound algorithm, the performance of which potentially benefits from the sharp bound of the formulation. Our objective is to get further improvements in performance by integrating into (PQCR) alternative quadratization procedures that have been recently proposed in the literature.

11:00-12:30 Session 2B: Recent Advances in Optimization for Machine Learning, part II

Invited session organized by Émilie Chouzenoux and Jean-Christophe Pesquet

Location: Amphi 2
11:00
An Inertial Newton Algorithm for Deep Learning with Convergence Guarantees
PRESENTER: Camille Castera

ABSTRACT. We introduce a new second-order inertial method for machine learning called INDIAN (Castera et al. 2019), exploiting the geometry of the loss function while requiring only stochastic approximations of function values and generalized gradients. This makes the method fully implementable and adapted to large-scale optimization problems such as the training of a deep neural network. The algorithm combines both gradient-descent and Newton features as well as inertia. It is a discretized and split version of a dynamical system introduced in (Alvarez et al. 2002). We provide a strong meaning to each hyperparameter of the model by making a connection to Newton’s second law. We prove the convergence of INDIAN to critical points for almost any classical deep learning problem. To do so, we provide a well-suited framework to analyze deep learning losses, involving tame optimization and the Clarke subdifferential (Clarke, 1990; Bolte et al., 2017). In this framework we provide a step-by-step proof recipe combining continuous dynamical system analysis with discrete stochastic approximations along the lines of (Benaim et al. 2005). From an empirical point of view, the algorithm shows promising results on popular benchmark problems, as well as some appealing generalization properties.

11:30
Learning step sizes for unfolded sparse coding
PRESENTER: Pierre Ablin

ABSTRACT. Sparse coding is typically solved by iterative optimization techniques, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using neural networks is a practical way to accelerate estimation. In this paper, we study the selection of adapted step sizes for ISTA. We show that a simple step size strategy can improve the convergence rate of ISTA by leveraging the sparsity of the iterates. However, it is impractical in most large-scale applications. Therefore, we propose a network architecture where only the step sizes of ISTA are learned. We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes. Experiments show that our method is competitive with state-of-the-art networks when the solutions are sparse enough.
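
A minimal sketch of unrolled ISTA in which each iteration (layer) has its own step size, the parameters the proposed network would learn; here they are simply fixed to 1/L rather than trained, and the Lasso data are synthetic:

```python
# Unrolled ISTA with per-layer step sizes alphas[t] (untrained in this sketch).
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(D, y, lam, alphas):
    x = np.zeros(D.shape[1])
    for a in alphas:                                 # one "layer" per step size
        x = soft_threshold(x - a * D.T @ (D @ x - y), a * lam)
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 100)) / np.sqrt(50)
y = D @ (rng.normal(size=100) * (rng.random(100) < 0.05))
L = np.linalg.norm(D, 2) ** 2                        # Lipschitz constant of the gradient
x_hat = unrolled_ista(D, y, lam=0.1, alphas=[1.0 / L] * 20)
```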

12:00
General Risk Measures for Robust Machine Learning
PRESENTER: Henri Gérard

ABSTRACT. A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function on some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we bring new insights into this problem by using the framework developed in quantitative finance for risk measures. We show that the original min-max problem can be recast as a convex minimization problem under suitable assumptions. We discuss several important examples of robust formulations, in particular by defining ambiguity sets based on phi-divergences and the Wasserstein metric. We also propose an efficient algorithm for solving the corresponding convex optimization problems involving complex convex constraints. Through simulation examples, we demonstrate that this algorithm scales well on real data sets.
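
As one classical instance of such a robust formulation (a standard construction, not the paper's experiments), replacing the expectation with the Conditional Value-at-Risk via the Rockafellar-Uryasev program:

```python
# CVaR_alpha minimization: min_{w,t} t + E[(loss(w) - t)_+] / (1 - alpha),
# jointly convex in (w, t) for a convex per-sample loss; data are toy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=200)

def cvar_objective(params, alpha=0.9):
    w, t = params[:-1], params[-1]
    losses = (X @ w - y) ** 2                    # per-sample convex loss
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

res = minimize(cvar_objective, np.zeros(6), method="Nelder-Mead",
               options={"maxiter": 5000})
w_robust = res.x[:-1]                            # parameters hedged against tail losses
```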

11:00-13:00 Session 2C: Applications: Telecommunications, II
Location: A1.140
11:00
Modeling transmission link's degradation caused by weather phenomena - the special case of FSO networks
PRESENTER: Marinela Shehaj

ABSTRACT. In this work we consider communication networks sensitive to weather conditions, such as Free Space Optics (FSO). The network optimization problem we study is how to dimension the network links at the lowest cost while assuring traffic demand satisfaction at an acceptable level for all weather conditions. To deal with this problem we have proposed a mathematical formulation together with an optimization approach. Such a model can be built using robust optimization techniques, and for that it is important to find a tractable way of characterizing possible link (capacity) degradation states corresponding to weather conditions not known in advance. We will show how weather records can be translated into vectors of link capacity available (after degradation) in particular weather states, and how the resulting link degradation states may be represented mathematically in a compact and tractable way to be exploited in optimization. To solve the second task we will make use of a generalization of the combinatorial problem of finding a minimum hitting set, to deduce a compact set approximating a given set of link degradation states. This investigation is the subject of our current research, which will be part of the presentation at PGMODays 2019.

11:30
Optimizing Battery Usage for a Telecommunications Company Participating in a Curtailing Market

ABSTRACT. Using batteries as backup in case of power outages is common in telecommunications companies, since they provide critical services and need to keep their services always online. The company may also use those batteries to participate in the energy market. Indeed, since energy prices vary over time, batteries can be used to cover the energy demands when these prices are high. Another energy market for which the use of batteries can be profitable is the curtailing market. In this context, the company can be incited to cut down its consumption (i.e., to perform a curtailment) by receiving a reward in exchange when the total demand is larger than the production. Battery management respecting the backup usage rules and the ones imposed by the energy market is a key aspect in keeping the network safe and also in improving the revenue of the company. In the case of a mono-battery system, the battery can be viewed as a power stock and its management treated as a particular production planning problem with specific rules on the inventory. A polynomial time algorithm is proposed to optimize the management of the battery, providing its optimal usage while respecting the imposed rules.

12:00
Sequential Learning in Resource Allocation Games as Path Planning Problems with Side-Observations
PRESENTER: Dong Quan Vu

ABSTRACT. Resource allocation games are used to model a vast range of practical problems. Two of the most renowned are the Colonel Blotto game (CB game) and the Hide-and-Seek game (HS game). These games have been studied in depth, but only in their one-shot, full-information versions. In most applications, it is more natural to consider a repeated-game model with incomplete information; however, it is not trivial how one can efficiently learn in this online version of the games due to their extremely large search space. In this work, we focus on the regret-minimization analysis of the problem where a learner repeatedly plays a CB game (or an HS game) and receives semi-bandit feedback at the end of each stage. We first show that CB games and HS games can be cast as path planning problems with side-observations (SOPPP), an important instance of the combinatorial bandit. Our second contribution is to propose a novel algorithm for solving SOPPPs, a variant of the classical EXP3 algorithm, that obtains important improvements in the regret guarantees and running time compared to the state of the art. We illustrate these improvements by analyzing the use of our algorithm in the CB and HS games.
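
For reference, the classical EXP3 update that the proposed algorithm refines with side-observations; this sketch implements only the vanilla version, on a toy reward:

```python
# Vanilla EXP3 for adversarial bandits: importance-weighted exponential updates.
import numpy as np

def exp3(n_arms, horizon, reward_fn, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.ones(n_arms)
    for t in range(horizon):
        p = w / w.sum()
        arm = rng.choice(n_arms, p=p)
        r = reward_fn(t, arm)                   # observed reward in [0, 1]
        w[arm] *= np.exp(eta * r / p[arm])      # unbiased importance weighting
    return w / w.sum()

p_final = exp3(5, 2000, lambda t, a: float(a == 3))  # arm 3 always pays 1
print(p_final)                                       # mass concentrates on arm 3
```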

12:30
Optimizing Network Investments and Designing of Offers for a Telecommunication Operator
PRESENTER: Adrien Cambier

ABSTRACT. The exponential growth of mobile traffic pushes telecommunication companies to expand their network through important investments, so as to satisfy their subscribers' demand in speed and volume. However, excessive or useless investments over the time horizon should be avoided. As service providers, operators can use subsidies to avoid unnecessary investments. This has led to the design of mobile master plans, whose modeling under a Mixed Integer Linear formulation has been studied in our previous works. This modeling integrates both subscriber and network investments, and considers two types of network investments: densification and coverage extension by installing the new technology. These three types of investments are jointly optimized at each period of the time horizon while satisfying capacity and strategic guideline constraints. We provide a MILP for the mobile master plan optimization problem able to tackle new site installation in a 5G context and taking into account site coverage overlap. We linearize the model, reinforce it with several valid inequalities, and adapt the heuristics from our previous works. We prove the NP-hardness of our formulation, even in the case of a single-generation, single-module and mono-period framework, by a reduction from the set covering problem.

11:00-13:00 Session 2D: Global Optimization I
Location: A1.134
11:00
Expressiveness, Robustness and Stability of Landscape Features
PRESENTER: Quentin Renau

ABSTRACT. Exploratory Landscape Analysis (ELA) aims at measuring the characteristics of an optimization problem through a set of numerical values called features, with the hope to use these indicators for an automated selection and/or configuration of a suitable solver. ELA has been successfully applied to a broad range of classical optimization problems, and forms part of the state of the art in a rapidly growing number of disciplines. However, various elementary aspects of ELA are far from being well understood. Our work addresses the following basic properties:
- expressiveness: how well does a feature discriminate between different optimization problems?
- stability: how robust is the feature value against the sampling strategy?
- influence of sample size: does the number of sample points influence the feature value?
We analyze these questions on 7 out of the 17 feature sets covered by the flacco package. Our test bed is the 24 noiseless BBOB functions. To answer the second question, we compare three different sampling methods: uniform iid sampling, Latin Hypercube Sampling, and Sobol' sequences, and take into account the randomness of these generators. We show that the impact of the sample generator is much more important than anticipated.

11:30
Unconstrained nonlinear relaxations in global optimization

ABSTRACT. The global optimization of mixed-integer nonlinear programs is an ambitious goal, with many practical applications. Here, we consider the broadest class of such problems, where no convexity or regularity assumptions are made. We only require that an expression graph of the problem be given.

The main approach to solve these generic problems is the reformulation convexification technique. It uses a symbolic reformulation to express the problem in a form that facilitates the implementation of various tools like convex relaxations and bound tightening techniques. Many successful global solvers follow this approach and make use of decades of engineering in mixed-integer linear programming for robustness and efficiency.

We now know that hybridization with nonlinear programming can lead to substantial improvements, even if linear relaxations remain the default setting in most solvers. We describe the implementation of a global optimization solver that can use both linear and nonlinear relaxations. Even if the dynamic approach dominates, we show that nonlinear relaxations can be competitive by themselves. For that purpose, we implemented a nonlinear optimization algorithm specifically designed to solve our nonlinear relaxations. Its success is based on a duality result and on augmented Lagrangian relaxations, both leading to unconstrained relaxations of the problem.

12:00
Exact Verification of Piecewise Affine Dynamical Systems

ABSTRACT. This work addresses a verification problem for a class of discrete-time piecewise affine systems. The verification problem considered here can be reduced to an optimization problem whose objective function is convex and quadratic and whose constraint set is the set of reachable values of the system. The proposed method solves the optimization problem exactly by introducing piecewise quadratic Lyapunov functions.

12:30
Hybridizing Simulated Annealing with a Radial Basis Function Networks Model for Global Optimization Problems ***CANCELLED***

ABSTRACT. In most cases, many problems in several fields involve the determination of the global optimum of a multidimensional function with a great number of local optima. Despite their contribution in terms of efficiency, the existing methods still reveal a major handicap in escaping the trap of local optima. This paper presents a new scenario for hybridizing a global and stochastic metaheuristic, Simulated Annealing (SA), with a Radial Basis Function Networks Model (RBFNM) to deal with such problems. The proposed approach, called Enhanced Simulated Annealing (ESA), aims to take advantage of the power of neural networks in terms of optimization. Supervised learning is applied to build a network model that can simulate the objective function. Our goal is to provide a new tool to both improve the solution quality and avoid premature convergence of SA. It is a low-level relay hybridization, since the RBFNM is incorporated in the SA algorithm instead of the neighborhood process. A comparison between SA and ESA is performed to show the efficiency of the new approach, applied on some standard test optimization functions known to be multi-dimensional and to have several local optima. Despite the cost in terms of computing time, the results are encouraging and promising in terms of convergence.

13:00
L0-Regularized Huber Loss Function - ****CANCELLED****
PRESENTER: Mustafa C. Pinar

ABSTRACT. We study the structure of local and global minimizers of the piecewise quadratic Huber loss function regularized with an L0-norm term for sparsity. The Huber loss function is frequently utilized in robust statistics and engineering applications. We exhibit necessary and sufficient conditions for strict local minimizers. We prove that the set of global minimizers is not empty, and investigate conditions for the sparsity of global minimizers.

11:00-13:00 Session 2F: Optimization of energy
Location: A1.133
11:00
Optimization of the power mix and the remuneration of the electric vehicles with VGI facilities

ABSTRACT. Porto Santo, a Portuguese island in the Madeira region, is developing a strategy on electric vehicles with intelligent and reversible recharging. Recharge system forecasts would take into account the island's driving conditions and drivers' behaviour as well as meteorological characteristics, since a storage system supporting the power grid will allow the expansion of intermittent renewable energy sources. Nowadays, the power supply is provided mainly by thermal power plants, with 85% of the annual production, and about 15% by photovoltaic and wind power units. Both resources, solar and wind, may be exploited for immediate consumption when meteorological conditions correspond to customers' demand, and for storage and delayed restitution when production exceeds demand.

We build an optimization model for the power system considering different remuneration modes for car owners. The integer programming model is solved for a typical period with an hourly time step and different intermittent REn production levels. In the objective function we introduce the cost of the power units; the household electricity demand is given.

Data from the local transmission system operator EEM (year 2017) are used for this model. The optimal distribution of these resources, involving economic, behavioral and stochastic parameters, is analysed considering the level of car battery remuneration.

11:30
Quantifying the value of flexibility: demand response versus storage

ABSTRACT. Intermittent sources of energy represent a challenge for electrical networks, particularly regarding demand satisfaction at peak times. Energy management tools such as load shaving or storage systems can be used to mitigate abrupt variations in the network. The value of different mechanisms to move energy through time is determined by a multi-objective programming approach, which aims at minimizing operating costs as well as carbon emissions. By combining the output with sensitivity theory, the new technique builds a three-dimensional Pareto front, in which the usual information on indifference costs is accompanied by a third dimension that provides the decision maker with a quantitative measure to evaluate the relative impact of the two conflicting objectives. The interest of the methodology is assessed on three instances representing typical configurations in Brazil, Germany and France, respectively corresponding to a system that is hydro-dominated, thermo-dominated, and with a balanced mix of hydro and thermal power.

12:00
Economical assessment of different usage of microgrid battery storage in the UK market context

ABSTRACT. Due to carbon reduction policies, the share of renewable energies has increased. One of the emerging solutions for handling the intermittency of these resources is the development of microgrids with battery storage. The battery storage can be used to reduce the electricity bill or to gain revenue from ancillary services. The objective of this project is to conduct an economical study on a commercial microgrid with onsite PV production and battery storage located in the UK, to find the most beneficial usage of the battery. Three use cases are defined:
1. Optimization of electricity consumption
2. Optimization of revenue from ancillary services
3. Optimization of electricity consumption and ancillary service revenue at the same time
The ancillary services considered in this project are Firm Frequency Response (FFR) and Short-Term Operating Reserve (STOR), operated through an aggregator of flexibilities. The savings of each case are analyzed based on one year of data, and the potential interaction between the microgrid and the aggregator is discussed. The result of this study shows that the most beneficial use case is using the battery for optimizing electricity consumption and providing the FFR service at the same time, while using the battery only for ancillary services is not profitable.

12:30
Optimal Design of a District Cooling System by Mixed Integer Linear Programming
PRESENTER: Bingqian Liu

ABSTRACT. In a district cooling system, chillers convert electricity to cooling power which is distributed to the buildings in the district. Designing such a system involves choosing the type and number of chillers to be installed and the ice storage capacity. These decisions should consider not only the construction costs over a multi-year horizon, but also the operation costs of the system during its whole lifetime. To accurately compute these operation costs, a detailed schedule describing the on/off status and load of the chillers on an hourly basis should be built for the entire horizon. This optimization problem can be formulated as a mixed-integer linear program (MILP). However, its huge size makes it intractable for current MILP solvers. We thus consider a deployment plan involving a limited number of years and use a clustering approach to select a small set of typical and extreme days to represent the various conditions under which the system will be operated. The resulting MILP is then solved by a customized Branch & Cut algorithm exploiting the hierarchical relationship between the construction and operation variables. We provide preliminary computational results based on a real-life case study in China.

11:00-13:00 Session 2G: Decomposition methods in Stochastic Optimization
Location: A1.128
11:00
SMS++: a Structured Modelling System for Optimization

ABSTRACT. In an effort to provide decision-makers with tools to manage complex systems, highly intricate mathematical models have to be constructed and solved. They have to represent the system under analysis with an appropriate degree of accuracy, capturing phenomena at time resolutions spanning from decades to minutes, accounting for different forms of uncertainty description. A hierarchical combination of heterogeneous decomposition methods, together with specialized solution methods for specific sub-structures of the model, is crucial to make these huge-scale optimization problems tractable. As the currently available modelling and solving tools do not offer adequate support for fully exploiting the structure present in these models, dealing with them is a colossal task in terms of programming. The aim of the open-source Structured Modeling System++ (SMS++) is to facilitate the implementation of general and flexible algorithms for optimization problems with several nested layers of structure. We will present the current state of the SMS++ system, focussing on recent developments concerning the representation of uncertainty, which are meant to facilitate the transformation of complex deterministic models into stochastic ones. Specific SMS++ facilities for "general-purpose decomposition techniques" will then allow to efficiently produce multi-level decomposition approaches to these problems.

11:30
Stochastic lot-sizing problem with remanufacturing: a dual dynamic decomposition approach
PRESENTER: Franco Quezada

ABSTRACT. We consider a multi-item multi-echelon lot-sizing problem within a remanufacturing system involving three production echelons: disassembly, refurbishing and reassembly. We aim at optimizing the production planning, which involves making decisions about the production level, the timing and the resources, in order to meet customers' demand in the most cost-effective way.

We assume a stochastic environment and we propose a multi-stage stochastic programming approach. We first investigate a stochastic dynamic programming formulation, which allows decomposing the problem into a series of single-node subproblems. We then reformulate the obtained subproblems using a binary approximation of the continuous state variables, so that the SDDiP algorithm proposed in [2] can be applied to solve the problem. We also study an approximate version of the SDDiP algorithm in which a cutting-plane generation phase is carried out to approximate the expected cost-to-go functions. The proposed formulation provides a starting point for the application of such decomposition methods to production planning problems. Additionally, our numerical results show that the proposed method is capable of obtaining optimal, or near-optimal, solutions in practicable computation times for large-size instances, showing its applicability to real-world optimization problems. These results can be found in [1].
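
For readers unfamiliar with the binarization step, here is a minimal sketch in our own notation (not the authors'): a continuous state variable $x \in [0, U]$ is approximated to precision $\Delta$ by binary variables $\lambda_0, \ldots, \lambda_J$,

\[ x \;\approx\; \Delta \sum_{j=0}^{J} 2^{j}\,\lambda_j, \qquad \lambda_j \in \{0,1\}, \qquad J = \lceil \log_2(U/\Delta) \rceil, \]

so that the SDDiP cuts, which require binary state variables, become applicable.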

12:00
A decomposition method by component for the optimization of maintenance scheduling
PRESENTER: Thomas Bittar

ABSTRACT. In order to improve the performance of its hydroelectric fleet, EDF seeks to optimize the maintenance scheduling of the components of hydropower plants. We address an idealized maintenance optimization problem. We study a system of several physical components (turbines, alternators, generators) sharing a common stock of spare parts. Components experience random failures that occur according to known failure distributions. We seek a deterministic preventive maintenance strategy that minimizes the expected cost depending on maintenances and forced outages of the system. The interaction prediction principle is used to decompose the original large-scale optimization problem into a sequence of independent subproblems of smaller dimension. Each subproblem consists in optimizing the maintenance on a single component. The resulting algorithm iteratively solves the subproblems with a blackbox algorithm and coordinates the components. The maintenance optimization problem is a mixed-integer problem. However, since decomposition methods are based on variational techniques, the dynamics of the system and the cost functions need to be relaxed. Relaxation parameters have an important influence on the optimization and must be appropriately chosen. We apply the decomposition method on relaxed systems with up to 80 components. In high dimension it outperforms the blackbox algorithm applied directly to the original problem.

12:30
Scenario and Stage Decompositions for Planning Energy Investment under Uncertainty ***Cancelled***

ABSTRACT. We consider multistage risk-averse stochastic programming models for the generation expansion planning problem for energy systems with here-and-now investment decisions. The resulting problem is coupled both along scenarios and along stages. An important algorithm for dealing with this kind of problem is the Progressive Hedging Algorithm (PHA), which decomposes the problem by scenarios. For every iteration and scenario, the PHA requires solving a quadratic problem (QP) that, depending on the number of stages, can itself be a large-scale optimization problem. To overcome this difficulty, we equip the PHA with a stage decomposition yielding more, but significantly smaller, QPs.
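
For context, the scenario decomposition at play is the standard PHA iteration (a generic statement, not specific to this talk): for scenarios $s$ with probabilities $p_s$ and penalty $\rho > 0$,

\[ x_s^{k+1} \in \operatorname*{arg\,min}_x \; f_s(x) + \langle w_s^k, x \rangle + \tfrac{\rho}{2}\|x - \bar{x}^k\|^2, \qquad \bar{x}^{k+1} = \sum_s p_s\, x_s^{k+1}, \qquad w_s^{k+1} = w_s^k + \rho\,(x_s^{k+1} - \bar{x}^{k+1}). \]

Each $x_s$-update is exactly the per-scenario QP that the proposed stage decomposition splits further.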

11:00-13:00 Session 2H: Robust Optimization
Location: A1.122
11:00
Anchor-Robust Solutions for the Resource-Constrained Project Scheduling Problem

ABSTRACT. The Resource-Constrained Project Scheduling Problem (RCPSP) is considered in the case where jobs have uncertain processing times. A robust optimization approach is proposed to compute a baseline schedule with a bounded makespan, while taking uncertainty into account. The objective of the Anchor-Robust RCPSP is to maximize the number of so-called anchored jobs, i.e., jobs with guaranteed starting times. We investigate the Anchor-Robust RCPSP for budgeted uncertainty by providing MIP formulations and heuristics. We illustrate how the obtained solutions can be used in a decision-aiding tool.

11:30
Constrained Multi-Objective Optimization under Uncertainty with Multi-fidelity Approximations
PRESENTER: Mickaël Rivier

ABSTRACT. We present a novel method to tackle multi-objective optimization problems under uncertainty. In particular, the aim is to handle both robust and reliability-based optimization problems, where robustness and reliability measures can be estimated with tunable fidelity. Some previous works rely on the concept of Bounding-Box. In these contributions, robustness and reliability measures are approximated by a Bounding-Box (or conservative box), which is roughly a uniform-density representation of the unknown objectives and constraints. It is supplemented with a surrogate-assisting strategy, which is very effective in reducing the overall computational cost, notably during the last optimization iterations. We propose in this work an extension of the previous method, allowing for objects other than Bounding-Boxes. Such non-uniform approximations have been proposed in previous works. Among others, sampling and Gaussian measure approximations are presented and quantitatively compared. We give extensive insights into the construction of joint samplings and propose suitable Pareto dominance rules and POP (Pareto Optimal Probability) computations for these new measure approximations. The method is assessed on several algebraic test cases in terms of convergence rate and robustness. Finally, it is applied to the shape optimization of an Organic Rankine Cycle (ORC) turbine.

12:00
Online Computation with Untrusted Advice
PRESENTER: Shendan Jin

ABSTRACT. The advice model of online computation captures the setting in which an online algorithm is given some partial information concerning the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or ignore based on trustworthiness, in the current advice model the online algorithm typically treats it as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly.

In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze robust online algorithms that are resilient and perform well even when the advice is generated in a malicious, adversarial manner. We show how the new paradigm can be applied to well-studied online problems such as ski rental, online bidding, bin packing, and list update.

12:30
Bayesian Optimization Under Uncertainty for Chance Constrained Problems

ABSTRACT. Chance constraints are an important tool for modeling reliability in decision making in the presence of uncertainties. Indeed, a chance constraint enforces that the constraint is satisfied with high probability. In addition, we consider that the objective function is affected by uncertainties. This problem is challenging since modeling a complex system under uncertainty can be expensive, and for most real-world problems stochastic optimization will not be computationally viable. In this talk, we propose a Bayesian methodology to efficiently solve this class of problems. The central idea is to use Gaussian Process (GP) models together with appropriate acquisition functions to guide the search for an optimal solution. We first show that by specifying a GP prior on the objective function, the loss function becomes tractable. Similarly, using GP models for the constraints, the probability of constraint satisfaction can be efficiently approximated. Subsequently, we introduce new acquisition functions to iteratively select the points at which to query the expensive objective and constraint functions. Finally, we present numerical examples to validate our approach against benchmark results.
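
To make the constraint side concrete, here is a hedged sketch (our own illustration, not the authors' code): given a GP posterior on a constraint function c, the probability that c(x) <= 0 at a candidate point has the closed form Phi(-mu(x)/sigma(x)).

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Toy noisy constraint c(x) = x^2 - 1, observed with Gaussian noise.
    X = np.random.uniform(-2, 2, size=(30, 1))
    c = (X**2 - 1).ravel() + 0.1 * np.random.randn(30)
    gp = GaussianProcessRegressor(alpha=0.1**2).fit(X, c)

    # Posterior probability of feasibility P(c(x) <= 0) on a test grid.
    x_test = np.linspace(-2, 2, 200).reshape(-1, 1)
    mu, sigma = gp.predict(x_test, return_std=True)
    prob_feasible = norm.cdf(-mu / np.maximum(sigma, 1e-12))

An acquisition function can then weight the expected objective improvement by this feasibility probability, in the spirit of constrained Bayesian optimization.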

11:00-13:00 Session 2I: Recent Advances in Solving Large-Scale Optimal Power Flow Problems

Invited session organized by Julie Sliwak and Miguel Anjos

Location: A1.139
11:00
A Tight-and-Cheap Relaxation for the ACOPF Problem
PRESENTER: Miguel Anjos

ABSTRACT. Computational speed and global optimality are key needs for practical algorithms for the optimal power flow (OPF) problem. Recently, we proposed a tight-and-cheap conic relaxation for the ACOPF problem that offers a favourable trade-off between the second-order cone and the semidefinite relaxations for large-scale meshed networks in terms of optimality gap and computation time. In a recent paper, we show theoretically and numerically that this relaxation can be exact and can provide a globally optimal solution for the ACOPF problem. Moreover, numerical results on PGLib-OPF test cases with up to 588 buses show that the tight-and-cheap relaxation on average dominates the quadratic convex relaxation.

11:30
Improving Clique Decompositions of Semidefinite Relaxations for Optimal Power Flow Problems
PRESENTER: Julie Sliwak

ABSTRACT. The Alternating Current Optimal Power Flow (ACOPF) is a challenging problem due to its significant nonconvexity, which means that only local optimality can be guaranteed by first-order KKT conditions. Semidefinite relaxations are a powerful tool to prove global optimality: they provide tight lower bounds useful for global optimization approaches. Small-to-medium-sized Semidefinite Programming (SDP) problems are efficiently solved by state-of-the-art interior point solvers. Clique decomposition techniques enable the resolution of some larger sparse problems. However, there are many different ways to compute clique decompositions. In this presentation, we show that different clique decompositions are not equivalent in terms of resolution time by comparing different chordal extension algorithms on RTE and other MATPOWER public datasets. Moreover, we show theoretically that applying clique decomposition techniques to SDP relaxations of the ACOPF formulated in complex variables is not equivalent to applying such techniques to the same problems formulated in real variables. We also compare the computational performance of the reformulations coming from the complex SDP and from the real SDP problem. Finally, we propose a new clique combination algorithm that improves the computational performance of a given decomposition by adding more edges to the chordal extension.

12:00
Solving Alternating Current Optimal Power Flow to Global Optimality with Quadratic Reformulation Using Semi-Definite Programming and Branch-and-Bound
PRESENTER: Hadrien Godard

ABSTRACT. Alternating Current Optimal Power Flow (ACOPF) is naturally formulated as a non-convex problem. In that context, solving (ACOPF) to global optimality remains a challenge when classic convex relaxations are not exact. We use semidefinite programming to build a quadratic convex relaxation of (ACOPF). We show that this quadratic convex relaxation has the same optimal value as the classical semidefinite relaxation of (ACOPF), which is known to be tight. On this basis, we build a spatial branch-and-bound algorithm to solve (ACOPF) to global optimality, based on a quadratic convex programming bound. An advantage of our approach is that we capture the strength of semidefinite programming bounds in a quadratic convex program, which is faster to solve.

12:30
A Difference-of-Convex Approach for Optimal Power Flow Problems with Probability Constraints
PRESENTER: Paul Javal

ABSTRACT. Handling uncertainties in power system operation leads to challenging Optimal Power Flow (OPF) problems combining nonconvexities and stochasticities. In this work, we consider an OPF problem in which the uncertainties are modeled using a probability constraint. For numerical tractability, we approximate the nonconvex and hard-to-evaluate probability function by a Difference-of-Convex function that is easily assessed via Monte-Carlo simulation. The resulting Difference-of-Convex model for the chance-constrained OPF is handled by a specialized proximal bundle method.

11:10-12:50 Session 3: IRSDI review session, part II
Location: A1.116
11:10
Improvement of condition-based maintenance for a guaranteed level of service on a fleet of equipment
PRESENTER: Antoine Grall

ABSTRACT. Condition-based maintenance aims at taking the current condition of systems or devices into account in maintenance decision-making. It makes it possible to optimize maintenance resources by performing maintenance work only when needed. This work deals with a fleet of equipment items that deteriorate with usage. The items have to fulfill several types of missions with different priorities and requirements. Resources dedicated to maintenance are limited, and the main aim is to improve the maintenance strategy so as to ensure a minimal global availability of the whole fleet at any time. Each device is modeled by its lifetime, which is described as the sum of two Weibull-distributed random variables. The first one represents the time the system spends in the nominal working state. The second one corresponds to the deteriorated working state. The only knowledge about the device state is an alarm indicating the change from the nominal to the degraded state. Dynamic maintenance approaches including device conditions are considered, with successive maintenance planning on a finite time horizon related to mission durations. Parametric decision rules based on each component's remaining useful lifetime are considered. They are compared numerically using Monte Carlo simulation.
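
The lifetime model is straightforward to simulate; a minimal Monte Carlo sketch with illustrative (made-up) parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Nominal phase ~ Weibull(shape k1, scale s1); degraded phase ~ Weibull(k2, s2).
    k1, s1, k2, s2 = 2.0, 10.0, 1.5, 3.0
    lifetime = s1 * rng.weibull(k1, n) + s2 * rng.weibull(k2, n)
    print(lifetime.mean(), np.quantile(lifetime, 0.1))

Such samples are the raw material for the Monte Carlo comparison of the parametric decision rules.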

11:35
Design of optimal maintenance strategies using piecewise deterministic Markov processes

ABSTRACT. The aim of this project is to propose a maintenance policy whose performance is close to the optimal maintenance cost for a multi-component equipment subject to random degradation and failures. We first propose a generic model for the dynamics of such equipment based on piecewise deterministic Markov processes (PDMPs). The maintenance optimisation problem consists in selecting the maintenance dates and operations in order to minimize some cost while keeping a high level of availability. This translates into an impulse control problem for PDMPs. The optimal performance is called the value function of the problem. It can be obtained by iterating some dynamic programming operator. We implemented an approximation procedure for the value function on a four-component model. We then designed a feasible maintenance policy that generates a cost close to the optimal one. In this talk, we will present and discuss the numerical results obtained for the four-component model.

12:00
Principal Component Analysis for a Bundle of Functional Time Series and its Application in Prediction

ABSTRACT. Modelling and forecasting a bundle of functional time series arises in a broad spectrum of real applications. It is a challenging task even when the number of time series is moderately large. In this paper, we explore principal component analysis for a bundle of functional time series (ts-PCA), in the sense that we transform the original series into new low-dimensional subseries. Those subseries are uncorrelated both contemporaneously and serially, and hence can be analyzed separately. Technically, this involves an eigenanalysis of a positive definite matrix. A simulation study shows that the proposed method is effective in revealing the true underlying low-dimensional structure. An application to daily electric load datasets is also reported.

12:25
Prediction of building epoch from unconstrained images using deep learning: an expert level?

ABSTRACT. We present first results of a study that aims at evaluating the benefits of deep-learning-based image interpretation approaches for smart building renovation decision-aiding systems. Indeed, street-view building pictures are a rich source of information that seems to be of high interest when evaluating the relevance of building renovation actions. In this context, we focus here on a first task, which consists in automatically estimating the construction epoch of individual houses from unconstrained pictures of their external views.

12:30-13:00 Session 4: Optimization for Machine Learning
Location: Amphi 2
12:30
Towards closing the gap between the theory and practice of SVRG

ABSTRACT. SVRG is one of the first stochastic variance-reduced methods for solving the empirical risk minimization problem. It is based on an inner-outer loop structure: a reference full gradient is evaluated in the outer loop, after which m steps of an inner loop are executed, where the reference gradient is used to build a variance-reduced estimate of the current gradient. Yet there is a significant gap between the parameter settings that the analysis suggests and what is known to work well in practice.

In particular, the current analysis shows that m should be of the order of the condition number. However, setting m to the number of data points works well in practice. Furthermore, the inner iterates of the current analysis must be reset using averaging after every outer loop. Yet in practice SVRG works best when they are updated continuously and not reset. We provide an analysis of these practical settings achieving the same complexity.

We also provide a more general analysis by using arbitrary sampling, allowing all forms of mini-batching. Thus, we can set the parameters such as the mini-batch and step sizes in such a way as to produce a more efficient algorithm in practice, as we show in numerical experiments.
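
A minimal sketch of SVRG in the practical setting discussed here (inner loop length m equal to the number of data points, inner iterates carried over with no averaging or reset); all parameter values are illustrative:

    import numpy as np

    def svrg(grad_i, n, w0, step, m, n_outer):
        """grad_i(w, i) returns the gradient of the i-th loss at w."""
        w_ref = w0.copy()
        for _ in range(n_outer):
            # Outer loop: reference full gradient.
            full_grad = np.mean([grad_i(w_ref, i) for i in range(n)], axis=0)
            w = w_ref.copy()
            for _ in range(m):
                # Inner loop: variance-reduced gradient estimate.
                i = np.random.randint(n)
                g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
                w -= step * g
            w_ref = w  # continue from the last iterate, no averaging
        return w_ref

    # Least-squares example: f_i(w) = 0.5 * (a_i . w - b_i)^2.
    A, b = np.random.randn(200, 10), np.random.randn(200)
    gi = lambda w, i: (A[i] @ w - b[i]) * A[i]
    w = svrg(gi, n=200, w0=np.zeros(10), step=0.05, m=200, n_outer=30)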

13:00-14:15Lunch
14:15-15:45 Session 5A: Shape Optimization and Applications part I

Invited session organized by Alex Ferrer. This session is devoted to new advances on shape optimization. The topics of the session range from theoretical aspects to numerical developments of industrial interest.

Location: Amphi 1
14:15
A consistent relaxation of optimal design problems for coupling shape and topological derivatives
PRESENTER: Samuel Amstutz

ABSTRACT. I will present a general procedure for approximating a `black and white' shape and topology optimization problem in linear elasticity with a density optimization problem, allowing for the presence of `grayscale' regions. The construction relies on a regularizing operator for smearing the characteristic functions involved in the exact optimization problem, and on an interpolation profile, which endows the intermediate density regions with fictitious material properties. In particular, this framework includes the classical SIMP model.

Under mild hypotheses on the smoothing operator and on the interpolation profile, we prove that the features of the approximate density optimization problem (material properties, objective function, etc.) converge to their exact counterparts as the smoothing parameter vanishes. Notably, the Fréchet derivative of the approximate objective functional with respect to the density function converges to either the shape or the topological derivative of the exact objective, depending on whether it is evaluated at the boundary of the domain or in its interior. These results shed new light on the connections between these two different notions of sensitivities for functions of the domain and on the construction of consistent interpolation schemes. Related algorithms will also be discussed.

14:45
TOPOLOGY OPTIMIZATION IN BRITTLE FRACTURE MECHANICS USING LEVEL-SETS
PRESENTER: Jeet Desai

ABSTRACT. The laws governing brittle fracture, proposed by A.A. Griffith, have been reformulated as the minimization of a mechanical energy. The reformulation renders the fracture problem in a variational form. A damage gradient model governs the progressive degradation in solids. Such a model can be effectively used to approximate the mechanical energy and thus the variational form. The variational formulation is, however, an inequality rather than an equation.

Shape or topology optimization of structures using level-sets is a well-known technique where a scalar function describing a shape implicitly is advected along a shape derivative. The shape derivative can be easily computed using Céa's technique. However, this approach requires a variational equation.

In order to apply Céa's technique to the fracture model, we propose a penalization model approximating the variational inequality by a variational equation. The computation of the shape derivative paves the way for an iterative algorithm devoted to minimizing the fracture or the elastic energy. In our numerical implementation, the shape evolution is driven by a body-fitted discretization strategy. Numerical examples in 2D and 3D will be presented to show the efficiency of the method.

15:15
Topology optimization of support structures for thermo-mechanical constraints in powder bed additive manufacturing
PRESENTER: Martin Bihr

ABSTRACT. In metal additive manufacturing, especially with the SLM (Selective Laser Melting) technology, support structures are often needed to ensure the manufacturability of the part. We use topology optimization, relying on the level set method, to obtain optimal support structures, with a mathematical model and optimization constraints that capture the main manufacturing requirements. Manufactured parts will complement our numerical results.

14:15-15:45 Session 5B: Stochastic Optimization and Machine Learning
Location: Amphi 2
14:15
Online Restart Strategies for SGD
PRESENTER: Scott Pesme

ABSTRACT. SGD is a central building block of many algorithms in machine learning. However, practical implementations do not reflect what theory suggests. Though Polyak-Ruppert averaging with polynomially decaying step sizes achieves the optimal statistical rate, most practitioners use piecewise-constant learning rates without averaging. We focus on proposing and analysing an online statistic which indicates a good time to restart the SGD algorithm with a reduced step size.

Several online statistics with cheap computational cost have been proposed to detect the saturation of the iterates. One of them was proposed by Pflug in 1983 and is based on the running average of inner products of successive gradients. This algorithm is still at the center of recent discussions. Through a detailed analysis of Pflug's statistic in the least-squares setting, we show that this procedure cannot lead to an adequate convergence diagnostic.

We propose a novel algorithm based on the evolution of the Euclidean distance between the current iterate and the iterate from which we last restarted the SGD recursion. This quantity is computationally cheap to calculate and is directly linked to the function we want to minimize. We provide experimental results showing satisfactory convergence rates on synthetic and real datasets.
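
A schematic version of such a distance-based restart rule (our own simplification; the talk's exact statistic and test differ in details), where grad stands for a stochastic gradient oracle:

    import numpy as np

    def sgd_with_distance_restarts(grad, w0, step0, n_iter, check_every=100):
        """Monitor ||w_t - w_restart||; once it stops growing, the iterates
        are presumed saturated, so restart with a halved step size."""
        w, w_restart, step = w0.copy(), w0.copy(), step0
        prev_dist = 0.0
        for t in range(1, n_iter + 1):
            w -= step * grad(w)
            if t % check_every == 0:
                dist = np.linalg.norm(w - w_restart)
                if dist <= prev_dist:      # distance has saturated
                    step /= 2.0
                    w_restart = w.copy()
                    prev_dist = 0.0
                else:
                    prev_dist = dist
        return w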

14:45
Bregman Proximal Methods Revisited for Solving Variational Inequalities with Singularities ***Cancelled***

ABSTRACT. Motivated by several applications in machine learning and imaging, we study a class of variational inequalities associated with possibly unbounded and non-Lipschitz continuous operators. In contrast to their classical counterparts, these problems can exhibit singularities near the boundary of the feasible domain, hence precluding the use of standard first-order methods. To circumvent this difficulty, we introduce more general classes of operators appropriately tailored to the singularity landscape, and we derive Bregman proximal methods that exploit this structure and provide optimal convergence rates.

15:15
An adaptive stochastic optimization algorithm for resource allocation
PRESENTER: Xavier Fontaine

ABSTRACT. We consider the classical problem of sequential resource allocation where a decision maker must repeatedly divide a budget between several resources, each with diminishing returns. This can be recast as a specific stochastic optimization problem where the objective is to maximize the cumulative reward, or equivalently to minimize the regret. We construct an algorithm that is adaptive to the complexity of the problem, expressed in terms of the regularity of the returns of the resources, measured by the exponent in the Łojasiewicz inequality (or by their universal concavity parameter). Our parameter-independent algorithm recovers the optimal rates for strongly-concave functions and the classical fast rates of multi-armed bandits (for linear reward functions). Moreover, the algorithm improves existing results on stochastic optimization in this regret-minimization setting for intermediate cases.

14:15-15:45 Session 5C: The Advent of Prosumers: Challenges and Opportunities

Invited session organized by Miguel Anjos

Location: A1.140
14:15
Artificial Intelligence and optimal policies for prosumer integration

ABSTRACT. The emergence of small-scale electricity prosumers may lead to various scenarios: prosumers may increase the variability of the total net load and eventually disconnect from the grid. Alternatively, they may remain connected and provide flexibility for the overall system. Current policy may promote disconnection. Indeed, the commodity cost in the typical electricity bill is modest; the other costs compensate grid services and pay for environmental initiatives [1]. The integration of high numbers of small-scale prosumers is an important challenge. Its success relies on the energy policy, and policy suitability can be tested prior to its real-life implementation. We propose a computational agent-based framework to represent an electricity system composed of heterogeneous smart stakeholders (prosumers, distribution companies, grid operator). The decision-making model is key to the agents' intelligence. While the decision-making behaviour of some agents can be formulated as a single-objective optimization model (minimizing the cost of energy procurement), the behaviour of other agents appears to be more complex. The grid operator's actions will depend on the decisions of prosumers, who have a negligible individual effect but collectively may have a greater impact. Deep Learning may be used to capture these undesirable load fluctuations and propose corrective policies to mitigate their effect.

14:45
Pleiad: A Python Framework to Optimize Residential Power Consumption

ABSTRACT. Demand response is a strong lever for achieving more flexibility in energy systems, especially nowadays as optimization-based techniques for scheduling energy consumption are gaining significance. In our work, we generalized an existing MILP model to a general class of energy systems in which an arbitrarily large network binds several objects (appliances, batteries, PV modules, ...). We developed a Python framework called Pleiad, which schedules the energy consumption of such a network by optimizing a custom cost function with a customizable MILP model. Pleiad provides tools for network instantiation and for designing the rules governing the creation of the MILP model, but also for the visualization of results. We also implemented a library of appliances whose technical specifications derive from industrial products or from the scientific literature. The library also contains existing grid tariffs and tools to simulate the output of PV production. A small case study was performed, in which we studied the flexibility potential of a dwelling equipped with a battery, PV modules, and appliances from the library.
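
Pleiad's own API is not reproduced here; as a hedged illustration of the kind of MILP it generates, scheduling a single interruptible appliance against an hourly tariff can be written in plain PuLP (all names and numbers are made up):

    import pulp

    T = range(24)
    tariff = [0.10] * 7 + [0.25] * 12 + [0.10] * 5        # EUR/kWh, illustrative
    energy_needed, max_power, min_power = 6.0, 2.0, 0.5   # kWh, kW, kW

    prob = pulp.LpProblem("appliance_scheduling", pulp.LpMinimize)
    p = pulp.LpVariable.dicts("p", T, lowBound=0)
    on = pulp.LpVariable.dicts("on", T, cat="Binary")
    prob += pulp.lpSum(tariff[t] * p[t] for t in T)       # electricity bill
    prob += pulp.lpSum(p[t] for t in T) == energy_needed  # energy requirement
    for t in T:
        prob += p[t] <= max_power * on[t]   # power bounds when running
        prob += p[t] >= min_power * on[t]
    prob.solve()
    print([pulp.value(p[t]) for t in T])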

15:15
Peer-to-peer Electricity Market Analysis: from Variational to Generalized Nash Equilibrium
PRESENTER: Paulin Jacquot

ABSTRACT. We consider a network of prosumers involved in peer-to-peer energy exchanges, with differentiated price preferences on the trades with their neighbors, and we analyze two market designs: (i) a centralized market, used as benchmark, where a global market operator optimizes the flows (trades) between the nodes, local demand and flexibility activation to maximize the system overall social welfare; (ii) a distributed peer-to-peer market design where prosumers in local energy communities optimize selfishly their trades, demand, and flexibility activation. We first characterize the solution of the peer-to-peer market as a Variational Equilibrium and prove that the set of Variational Equilibria coincides with the set of social welfare optimal solutions of market design (i). We give several results that help understanding the structure of the trades at an equilibrium or at the optimum. We characterize the impact of preferences on the network line congestion and renewable energy surplus under both designs. We provide a reduced example for which we give the set of all possible generalized equilibria, which enables us to approximate the price of anarchy. We provide a more realistic example which relies on the IEEE 14-bus network, for which we can simulate the trades under different price preferences.

14:15-15:45 Session 5D: Graphs and Optimization
Location: A1.134
14:15
Spectral Bounds For Graph Partitioning Problems
PRESENTER: José Neto

ABSTRACT. Given an undirected edge-weighted graph and a positive integer k, the maximum k-cut problem consists in finding a partition of the node set of the graph into k subsets so as to maximize the sum of the weights of the edges having their endpoints in different subsets of the partition. A new class of bounds for this problem is introduced, extending previous results on the maximum (2-)cut problem, and we show that they can strictly improve over other eigenvalue bounds from the literature. The variant of the problem in which the subsets of the partition have prescribed sizes is also considered, and we present a new class of similar bounds for the latter.

For both problems, the expressions of the new bounds involve the eigenvalues of the weighted adjacency matrix as well as its eigenvectors through given geometric parameters that are related to the maximum (2-)cut problem.

Computational results illustrating the potential impact of the new bounds are reported.
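
To fix ideas on bounds of this family, the classical Mohar-Poljak eigenvalue bound for the maximum (2-)cut, mc(G) <= (n/4) lambda_max(L) with L the weighted Laplacian, can be checked in a few lines (illustration only; the bounds presented in the talk refine this kind of estimate):

    import numpy as np

    # Weighted adjacency matrix of a small graph (5 unit-weight edges).
    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    L = np.diag(W.sum(axis=1)) - W       # weighted graph Laplacian
    n = W.shape[0]
    bound = n / 4 * np.linalg.eigvalsh(L).max()
    print(bound)                          # upper bound on the max-cut value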

14:45
On the minimum $s-t$ cut problem with budget constraints
PRESENTER: Aissi Hassene

ABSTRACT. We consider in this presentation the budgeted version of the minimum $s-t$ cut problem. Let $G=(V,A)$ be a directed graph with two distinguished nodes $s$ and $t$, and consider $k$ non-negative cost functions $c^1,\ldots,c^k: A \rightarrow \mathbb{Z}_+$, and $k-1$ budget bounds $b_1, \ldots, b_{k-1}$, where $k$ is a constant. The goal is to find an $s-t$ cut $C$ satisfying the budget constraints $c^h(C) \leqslant b_h$, for $h = 1,\ldots,k-1$, and whose cost $c^k(C)$ is minimum. We study the linear relaxation of the problem and give necessary and sufficient conditions under which it has an integral optimal basic solution. We also give a strongly polynomial time combinatorial algorithm for solving it. As a corollary, we obtain a $(1,k)$-pseudo-approximation algorithm for the problem.

15:15
New Results for the Flow Shop problem with Conflict Graph

ABSTRACT. This paper addresses the problem of scheduling jobs in a flow shop subject to the constraint that some conflicting jobs cannot be processed simultaneously on different machines. In the context of our study, these conflicts are given by a simple and undirected graph, called the conflict graph. The problem of minimizing the maximum completion time (makespan) is known to be NP-hard in the strong sense. We prove that permutation schedules are not dominant even for two machines, and we present a special case for which an optimal schedule can be found among permutation schedules. We also propose Mixed Integer Linear Programming (MILP) models, a Mixed Integer Quadratically Constrained Program (MIQCP) and a set of matheuristics. We further provide the results of extensive computational experiments performed on randomly generated instances to test the efficiency of the proposed approaches.

14:15-15:45 Session 5E: Optimal control
Location: A1.133
14:15
Tensor Decomposition for High-dimensional Hamilton-Jacobi-Bellman Equations
PRESENTER: Dante Kalise

ABSTRACT. This talk showcases ongoing research within the PGMO project TIDAL: Taming the Curse of Dimensionality in Dynamic Programming. A tensor decomposition approach is presented for the solution of high-dimensional, fully nonlinear Hamilton-Jacobi-Bellman equations arising in optimal feedback control and estimation of nonlinear dynamics. The proposed method combines a tensor train approximation of the value function with a Newton-like iterative method for the solution of the resulting nonlinear system. The effectiveness of tensor approximations circumvents the curse of dimensionality, solving Hamilton-Jacobi equations with more than 100 dimensions at modest cost. The linear scaling of the computational complexity with respect to the dimension makes it possible to solve optimal feedback control problems over high-dimensional state spaces. The synthesis method is effectively applied to the optimal feedback stabilization of nonlinear parabolic PDEs.
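
As a reminder of the equations being solved (one standard form, not specific to the project): for dynamics $\dot{x} = f(x,u)$ and running cost $\ell(x,u)$, the stationary HJB equation for the value function $V$ reads

\[ \min_{u \in U} \big\{ \nabla V(x) \cdot f(x,u) + \ell(x,u) \big\} \;=\; 0, \]

a fully nonlinear PDE posed on the whole state space -- whence the curse of dimensionality that the tensor train format mitigates.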

14:45
The logic of containing tumors
PRESENTER: Yannick Viossat

ABSTRACT. Many cancer treatments are administered at the maximal tolerated dose, and eventually fail as the tumor becomes resistant. An alternative is to use the minimal dose that allows containing the tumor. The hope is to diminish selection for resistance to treatment, allowing us to control the tumor for a longer time. Experimental results are promising. We study how to maximize the time at which tumor size becomes higher than a given threshold in simple models with two types of cells, sensitive and fully resistant to treatment. We find that under very general conditions, containment strategies are optimal, or almost so. However, the clinical gains with respect to aggressive treatments crucially depend on the tumor growth model. Finally, if resistant cells retain some sensitivity to treatment, the optimal strategy is to first contain the tumor with a low-dose treatment but then switch to the maximal tolerated dose. This is the opposite of what other authors have found. We will explain why.

14:15-15:45 Session 5F: Bilevel Programming and Applications in the Energy field

Invited session organized by Luce Brotcorne and Miguel Anjos

Location: A1.128
14:15
A Multi-Leader-Follower Game for Demand-Side Management

ABSTRACT. The energy domain currently faces many challenges of various natures (economical, ecological, political...). At the center of the preoccupations, the supply-demand balance in the grid has to be ensured at all times. Whereas the old paradigm consisted in adapting the production to the demand, the converse method has been on the rise for the past thirty years: adapting the demand to the production is known as demand-side management (DSM). Several techniques exist to implement DSM: we present here a model for load shifting, in which energy suppliers offer prices to a set of clients who shift their consumption in order to minimize their bill and the inconvenience due to the shifts. Mathematically, this takes the form of a multi-leader-follower game (MLFG). Whereas bilevel optimization problems usually model the interaction between one leader and one follower, MLFGs not only consider the hierarchical interaction between followers and leaders, but also the interactions among followers and among leaders. In this talk, we explain why handling such problems is difficult, and present a way to obtain feasible solutions to our DSM problem.

14:45
The Rank Pricing Problem with Ties

ABSTRACT. The Rank Pricing Problem (RPP) is an optimization problem which aims at setting the prices of a series of products of a company so as to maximize its revenue, taking into account the customers' purchasing decisions, which are modelled by means of a budget and a ranked list of preferred products. We present a generalization of the RPP, namely the Rank Pricing Problem with Ties (RPPT), in which we consider that ties may occur in the preferences of the customers. To model the RPPT, we present a formulation based on class representatives, an idea already used for the vertex coloring problem. To tackle its resolution, we also introduce a reformulation of the problem based on Benders decomposition.

15:15
Bilevel optimization applied to strategic pricing in electricity markets and extension to markets with massive entry of renewable energies and distributed generation
PRESENTER: Daniel Pereda

ABSTRACT. In this work, we present a bilevel programming formulation in which a generator strategically chooses a bid to maximize its profit given the choices of the other generators. More precisely, given a specific generator, we define a set of scenarios for the remaining agents and maximize the expected profit of the chosen company over all scenarios. The model accounts for the capability of the agent represented by the leader to affect the market price. The follower of the problem is the electric system operator, which runs a minimum-cost program that respects physical network constraints. In this work, transmission constraints are not considered, and costs and bids are modeled as convex piecewise linear functions. A penalty algorithm is formulated, together with an efficient algorithm for solving the follower problem, and convergence to a local maximum is proven.

The bilevel formulation is also compared with the Nash equilibrium formulation, and it is shown that if the scenario probabilities are close to those of the mixed-strategy equilibrium, then the expected payoffs under both formulations are close.

These ideas are extended and applied to the case where we have massive entry of renewable energies and distributed generation.

15:45
Strategic bidding in Price Coupled Regions - ****CANCELLED****

ABSTRACT. The classical Unit Commitment problem (UC) can be described as the problem of establishing the energy output of a set of generation units over a time horizon, in order to satisfy a demand for energy, while minimizing the cost of generation. Traditional models for the UC assume that the net demand for each period is known in advance. However, in practice, the demand is dictated by the amounts that can be sold by the producer at given prices on the day-ahead market.

Our aim is to model and solve the UC problem with a second level of decisions ensuring that the produced quantities are cleared at market equilibrium. In their simplest form, market equilibrium constraints are equivalent to the first-order optimality conditions of a linear program. The UC, in contrast, is usually linearized into a mixed integer program. Taking a similar approach, we are faced with a bilevel optimization problem where the first level is a MIP and the second level is linear.

In this talk, we present a reformulation of the considered bilevel problem into a MIP using the first-order optimality conditions of the second level, and we present valid inequalities. We then derive some heuristics to obtain solutions close to optimality.

14:15-15:45 Session 5G: Probabilistic Constraints, Risk and Robust Optimization
Location: A1.122
14:15
On facing demand forecast uncertainties by means of Robust Optimization

ABSTRACT. Air Separation Units (ASUs) extract oxygen, nitrogen and argon from the air. The first challenge of an ASU operator is to satisfy the demand, which can vary considerably in the presence of uncertainties. Hence, it is of paramount importance to cope efficiently with the irreducible forecast errors on the clients' consumption. Indeed, while forecasts can be improved to a certain extent, the range of the uncertainty can be captured so as to prevent unexpected behaviours with a proper level of risk mitigation. To this end, the so-called Robust Optimization approach is applied to the ASUs' production planning, with regard to their clients' consumption.

14:45
Constraint Generation for Risk Averse Two-Stage Stochastic Programs
PRESENTER: Roberto Minguez

ABSTRACT. A significant share of stochastic optimization problems in practice can be cast as two-stage stochastic programs. If uncertainty is available through a finite set of scenarios, which frequently occurs, and we are interested in accounting for risk aversion, the expectation in the recourse cost can be replaced with a worst-case function (i.e., robust optimization) or another risk functional, such as the conditional value-at-risk. In this paper we are interested in the latter situation, especially when the number of scenarios is large. For computational efficiency, we suggest algorithms that build on a clustering and constraint generation methodology. This strategy has previously been made to work efficiently in the case where the risk functional is a worst-case operation, but under special structure. In the two algorithms presented in this paper, typical iterations add increasingly many sets of constraints until the required accuracy is achieved. Both algorithms rely on a dynamic scenario aggregation technique. We are concerned with both general and fixed recourse situations, the latter implying that the duals of all second-stage linear problems share the same feasible set. We establish convergence of these two algorithms and demonstrate their effectiveness through various numerical experiments.
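
For concreteness, the conditional value-at-risk admits the Rockafellar-Uryasev representation

\[ \mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{t \in \mathbb{R}} \Big\{\, t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(Z-t)_{+}\big] \Big\}, \]

whose linear reformulation over a finite scenario set is what makes a constraint generation scheme applicable.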

15:15
Optimal reserve sizing for frequency regulation
PRESENTER: Adrien Le Franc

ABSTRACT. Reserve mechanisms are activated when a frequency deviation reveals a power disequilibrium on the electric grid. Delivering reserve power is attractive for energy storage owners, especially when combined with other domestic usages such as energy arbitrage or peak shaving. We present a model for optimally sizing power reserves for a microgrid equipped with a battery and managing local electricity consumption and production.

We focus on a daily problem where we want to compute both optimal hourly reserves and a minute-scale power management policy. In the first place, reserves are bid day-ahead, which introduces an open-loop decision variable. Second, power is managed during the day to minimize the operation cost of the microgrid, which includes penalties if reserves are not delivered when significant frequency deviations are observed. This results in a second-stage problem formulated as a multistage stochastic problem.

Our main concerns relate to the time decomposition of the problem, which involves several time scales and information structures, and to the nature of the stochastic process modeling frequency deviations, for which we propose a robust approach. Our work is inspired by real-world industrial challenges and data reported by Schneider Electric.

14:15-15:45 Session 5H: Black-Box Optimization
Location: A1.116
14:15
Self-Adjusting Mutation Rates with Provably Optimal Success Rules
PRESENTER: Carola Doerr

ABSTRACT. The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength $F$ and a success rate $s$. We analyze in this work how the performance of the (1+1) Evolutionary Algorithm (EA) on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths $F=1+o(1)$ and success rate $1/e$. We also prove that the running time obtained by this parameter setting is asymptotically optimal among all dynamic choices of the mutation rate for the (1+1) EA, up to lower order error terms. We show similar results for the resampling variant of the (1+1) EA, which enforces that at least one bit is flipped per iteration.
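
A minimal sketch of the dynamics analyzed, in our own instantiation of a success-based rule with update strength F and target success rate s (the paper's exact update and caps may differ):

    import numpy as np

    def one_plus_one_ea(n, F=1.05, s=1/np.e, max_iter=200_000, seed=0):
        rng = np.random.default_rng(seed)
        leading_ones = lambda y: int(np.argmin(y)) if 0 in y else n
        x = rng.integers(0, 2, n)
        fx, p = leading_ones(x), 1.0 / n     # self-adjusted mutation rate
        for _ in range(max_iter):
            flips = rng.random(n) < p
            if not flips.any():               # resampling variant: flip >= 1 bit
                flips[rng.integers(n)] = True
            y = np.where(flips, 1 - x, x)
            fy = leading_ones(y)
            success = fy > fx
            if fy >= fx:                      # elitist acceptance
                x, fx = y, fy
            # This rule is stationary exactly at success rate s.
            p = min(p * F, 0.5) if success else max(p * F ** (-s / (1 - s)), 1 / n**2)
            if fx == n:
                break
        return fx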

14:45
Towards Dynamic Algorithm Selection
PRESENTER: Anja Jankovic

ABSTRACT. Black-box optimization of a previously unknown problem can often prove to be a demanding task -- one must recognize the nature of the problem at hand and choose the algorithm exhibiting the best performance for that type of problem. The problem characterization is done via underlying fitness landscape features, which allow us to identify similarities and differences between problems. Our ambition is to extend classical algorithm selection techniques, mostly covered in the literature from an offline, i.e., static perspective, to the online setting, in which it will be possible to switch between different algorithms depending on the current stage of the optimization process. Here we present first steps towards an adaptive landscape analysis. This talk is based on the paper presented at this year's GECCO conference. Our approach is aimed at taking a closer look at how landscape features evolve during the optimization process and whether this information can be used to discriminate between different problems. The motivation of our work is to understand if and how one could exploit the information provided by the features to improve dynamic algorithm selection. Put differently, our goal is to leverage landscape analysis to adjust the choice of the algorithm on the fly.

15:15
An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search

ABSTRACT. A commonly used strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. Building on the recent success of Bet-and-Run approaches for restarted local search solvers, we introduce a more generic version that makes use of performance prediction. It is our goal to obtain the best possible results within a given time budget t using a given black-box optimization algorithm. In over 157 million experiments, we test different approaches. We show (1) that the so-called ``currentBest'' method is indeed a very reliable and robust baseline approach, and (2) that our approach can yield better results than the previous methods.

14:15-15:45 Session 5I: Machine Learning in Optimization and Problem Solving
Location: A1.139
14:15
Algorithmic configuration by learning and optimization

ABSTRACT. We propose a methodology, based on machine learning and optimization, for selecting a solver configuration for a given instance. First, we employ a set of solved instances and configurations to learn a performance function of the solver. Second, we solve a mixed-integer nonlinear program to find the best algorithmic configuration based on the performance function.

14:45
Learning to Price: Structured Learning to scale up Column Generation

ABSTRACT. We introduce a structured learning approach to approximate a difficult path problem on an acyclic digraph by the usual shortest path problem. The approach takes as input a database of instances and solutions of the difficult path problem of interest, and returns a mapping that transforms any instance of the difficult path problem into an instance of the usual shortest path problem that approximates the initial instance well. To that purpose, we introduce a maximum likelihood technique to train a structured prediction function which uses a shortest path problem as its prediction problem.

This provides a generic technique for deriving an efficient heuristic from a black-box solver that can handle only moderate-size instances of a path problem of interest: we use the black-box solver to generate a database of instances and solutions and train our structured learning method. (Large) new instances can then be handled by solving their approximation by a usual shortest path problem using standard algorithms.

Since path problems play an important role as pricing subproblems of column generation approaches, we also introduce matheuristics that leverage our approximations in that context. Numerical experiments show their efficiency on two stochastic vehicle scheduling problems.

15:15
Deep learning numerical methods for Hamilton Jacobi equations - ****CANCELLED****

ABSTRACT. In this work we use deep neural networks to approximate solutions of first-order Hamilton-Jacobi equations derived from nonlinear deterministic optimal control problems.

Following recent approaches, our algorithm removes the direct dependency between the state space dimension and the resolution complexity.

We will illustrate our method on some optimal control problems with state constraints.

15:45-16:15Coffee
16:15-17:45 Session 6A: Shape Optimization and Applications part II

Invited session organized by Alex Ferrer.

Location: Amphi 1
16:15
Null Space Gradient Flows for Constrained Optimization with Applications to Shape Optimization
PRESENTER: Florian Feppon

ABSTRACT. A gradient flow algorithm is proposed for solving generic equality- or inequality-constrained optimization problems set on Hilbert spaces. Our ultimate goal is nonparametric shape optimization, for which more classical optimization methods are often difficult to use because of the infinite dimensionality or the need for tuning algorithm parameters. We rely on a variant of classical gradient flows for equality-constrained problems: the search direction is a combination of a null space step and a range space step, aimed at reducing the value of the minimized objective function and the violation of the constraints, respectively. Inequality constraints are specifically addressed by solving a dual quadratic programming subproblem whose size is the number of active or violated constraints, which makes it possible to detect the subset of these constraints to which the optimization trajectory needs to remain tangent. We then extend the method to quite general optimization sets equipped with a suitable manifold structure, and notably to sets of shapes as they occur in shape optimization within the framework of Hadamard's boundary variation method. The numerical efficiency and ease of use of the method are demonstrated on realistic shape optimization problems.
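
In schematic form (our transcription; see the authors' papers for the precise treatment of inequality constraints): for equality constraints $C(x) = 0$ with Jacobian $DC$, the flow combines

\[ \dot{x} \;=\; -\,\alpha_J\,\big(I - DC^{T}(DC\,DC^{T})^{-1}DC\big)\,\nabla J(x)\;-\;\alpha_C\,DC^{T}(DC\,DC^{T})^{-1}C(x), \]

where the first term decreases $J$ while keeping the constraints fixed (null space step) and the second is a Gauss-Newton direction decreasing the violation $\|C(x)\|$ (range space step).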

16:45
Coupled optimization of both structure and mechanical connections (location and number)

ABSTRACT. One of the issues for the automotive industry is weight reduction. For this purpose, topology optimization is used for mechanical parts and usually involves a single part; its connections to other parts are assumed to be fixed. We propose here a coupled topology optimization of both the structure of a part and its connections (location and number) to other parts. Connection models (rigid support, and long standard bolt with a pre-stressed state) are idealized to be sufficiently representative yet cost-effective, whereas the structure, represented by a level-set function, is more realistic. A coupled optimization of the structure and the location of rigid supports is performed to minimize the volume of an engine accessories bracket under a compliance constraint. The structure is optimized with Hadamard's boundary variation method. The positions of the rigid supports are optimized with a parametric gradient-based algorithm. Thereafter, the concept of topological derivative is adapted to create an idealized bolt at the best location. This method is based on an asymptotic expansion that expresses the sensitivity of an objective function with respect to the creation of a small idealized bolt. The approach is demonstrated on a 3D academic test case for a compliance minimization problem.

17:15
Robust topology optimization of nanophotonic devices
PRESENTER: Nicolas Lebbe

ABSTRACT. We are interested in the shape and topology optimization of nanophotonic devices using the Hadamard shape derivative and level-set methodologies.

Our studies have led us to take physical uncertainties into account in the models in order to obtain devices that are robust to environmental parameters such as the incident wavelength of the laser, the ambient temperature, or uncertainties concerning the geometry manufactured by the lithography-etching process.

Mathematically, we are interested in a worst-case shape optimization problem of the form "max min J(Omega, delta)", where the maximum is sought over all admissible shapes Omega (subsets of R^3) and the minimum refers to the worst-case value of the objective functional J with respect to an uncertain parameter delta.

In nanophotonics, J(Omega, delta) usually refers to the output power of a component, and its evaluation requires computing the solution of the time-harmonic vector wave PDE, which depends on the shape Omega.

We will present the results obtained by applying a multi-objective optimization method to the resolution of this worst-case shape optimization problem.

16:15-17:45 Session 6B: Machine Learning and Numerical Methods
Location: Amphi 2
16:15
Sinkhorn Divergences for Unbalanced Optimal Transport

ABSTRACT. We will introduce in this presentation an extension of the formulation of Sinkhorn divergences to the unbalanced setting of arbitrary positive measures, providing both theoretical and algorithmic advances. Sinkhorn divergences leverage the entropic regularization of Optimal Transport (OT) to define geometric loss functions. They are differentiable, cheap to compute and do not suffer from the curse of dimensionality, while maintaining the geometric properties of OT; in particular they metrize the weak$^*$ convergence. Extending these divergences to the unbalanced setting is of utmost importance since most applications in data sciences require handling both transportation and creation/destruction of mass. This includes, for instance, problems as diverse as shape registration in medical imaging, density fitting in statistics, generative modeling in machine learning, and particle flows involving birth/death dynamics. Our first set of contributions is the definition and the theoretical analysis of the unbalanced Sinkhorn divergences. They enjoy the same properties as the balanced divergences (classical OT): indeed, we show that they are convex, differentiable and metrize the weak$^*$ convergence. Our second set of contributions studies generalized Sinkhorn iterations, which enable a fast, stable and massively parallelizable algorithm to compute these divergences.
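
For orientation, the balanced Sinkhorn scaling iterations take only a few lines; as a hedged sketch following the scaling-algorithm literature, the unbalanced extension with KL-relaxed marginals of strength rho merely raises the updates to the power rho/(rho + eps):

    import numpy as np

    def sinkhorn(a, b, C, eps, rho=None, n_iter=500):
        """Entropic OT between weight vectors a, b with cost matrix C.
        rho=None gives the balanced case; otherwise marginals are KL-relaxed."""
        K = np.exp(-C / eps)
        u, v = np.ones_like(a), np.ones_like(b)
        tau = 1.0 if rho is None else rho / (rho + eps)
        for _ in range(n_iter):
            u = (a / (K @ v)) ** tau
            v = (b / (K.T @ u)) ** tau
        return u[:, None] * K * v[None, :]   # transport plan

    a, b = np.ones(5) / 5, np.ones(7) / 7
    C = (np.linspace(0, 1, 5)[:, None] - np.linspace(0, 1, 7)[None, :]) ** 2
    P = sinkhorn(a, b, C, eps=0.05)
    print(P.sum())   # ~1 in the balanced case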

16:45
Newton-Conjugate Gradient Methods with Complexity Guarantees
PRESENTER: Clément Royer

ABSTRACT. There has been a recent surge of interest in finding local minima of nonconvex functions, mostly due to the emergence of such instances in data science. In this setting, one aims at developing algorithms with suitable worst-case complexity properties. However, endowing an algorithm with a complexity analysis often requires departing from the standard version of such a method. This can prevent one from efficiently implementing or even testing these variants, in spite of their theoretical appeal.

In this talk, we consider a classical, practical approach in large-scale optimization, where the (linear) conjugate gradient algorithm is incorporated in a Newton-type method. To equip this scheme with a complexity analysis, we revisit the convergence theory of the conjugate gradient algorithm when applied to a nonconvex quadratic function. We also leverage randomized linear algebra techniques to allow for second-order complexity results at a suitable cost. By incorporating these tools within our Newton-type framework, we are able to match the guarantees of recently proposed algorithms based on accelerated gradient. Finally, we illustrate the performance of our strategies on several nonconvex problems.

17:15
Interior point methods for logistic regression
PRESENTER: Rubing Shen

ABSTRACT. Logistic regression is a well-known statistical model used to predict binary outcomes. Fitting logistic regression models reduces to the minimization of an empirical loss penalizing the gap between the values predicted by the statistical model and the observations. The corresponding optimization problem is naturally formulated as an (unconstrained) nonlinear problem. In this talk, we study the resolution of logistic regression problems with a commercial interior point solver, Artelys Knitro. We first provide numerical studies showing that Knitro performs well when fitting logistic regression models, both with $\ell_1$ and $\ell_2$ penalties. We compare Knitro with the solver L-BFGS-B, currently used in the library Scikit-Learn. In the second part of the talk, we focus on the optimization of regularization hyperparameters. We suppose given a cross-validation evaluation function, based on a validation dataset. By using the implicit function theorem, we derive the sensitivity of the cross-validation loss function w.r.t. the penalty term, thus allowing us to optimize it with a descent algorithm based on BFGS. Finally, we outline some ideas to improve interior point algorithms for the resolution of optimization problems arising in machine learning.
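
As a hedged sketch of the sensitivity computation in the $\ell_2$ case (our own minimal version, independent of Knitro): at the optimum, $\nabla_w L(w^*) + \lambda w^* = 0$, so the implicit function theorem gives $dw^*/d\lambda = -(H + \lambda I)^{-1} w^*$, and the chain rule yields the derivative of the validation loss:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad_hess(w, X, y, lam):
        """Gradient and Hessian of the l2-penalized logistic loss (y in {0,1})."""
        p = sigmoid(X @ w)
        g = X.T @ (p - y) + lam * w
        H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(len(w))
        return g, H

    def dval_dlambda(w_star, lam, X_tr, y_tr, X_val, y_val):
        _, H = grad_hess(w_star, X_tr, y_tr, lam)   # H already includes lam*I
        dw = -np.linalg.solve(H, w_star)            # implicit function theorem
        grad_val = X_val.T @ (sigmoid(X_val @ w_star) - y_val)
        return grad_val @ dw                        # feeds a BFGS step on lambda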

16:15-17:45 Session 6C: Hydraulic Power Optimization
Location: A1.140
16:15
An extended LP formulation for the water pump scheduling problem
PRESENTER: Sophie Demassey

ABSTRACT. We present a new generic linear programming formulation of the pump scheduling problem in drinking water distribution networks, obtained by dualizing the time-coupling constraints of a standard compact MINLP formulation. By neglecting the impact of pressure variation in the water tanks on the hydraulic balance of the network, the approximate LP can be quickly and entirely populated in a preprocessing step using a hydraulic simulator. We were thus able to derive near-optimal feasible solutions on benchmark instances in seconds where other approaches in the literature require minutes or hours.

16:45
A Lagrangian Decomposition for the Hydro Unit Commitment Problem With Two Reservoirs

ABSTRACT. Scheduling power production in water valleys is a critical problem solved daily by electrical power companies around the world. This is a large-scale problem with strong non-linear and combinatorial characteristics. Moreover, many water valleys have cascading reservoirs, which increases the complexity of planning. In this work, we address the problem of optimizing short-term power generation planning, called the Hydro Unit Commitment Problem, considering two cascading reservoirs. We propose a Lagrangian decomposition of the problem, which allows us to solve for each reservoir separately, as subproblems. This work extends previous work in the literature that studies the single-reservoir version of the problem and shows that it reduces to the Constrained Shortest Path Problem. In our decomposition, we use the monotone reformulation and apply a labeling algorithm to the subproblem of the first reservoir. We also simplify the subproblem of the second reservoir to the Shortest Path Problem; in this case, the water bound constraints are relaxed in the decomposition.
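Schematically (with notation of our own, not the authors'), if $c_i$ is the cost of reservoir $i$ and the coupling constraints between the reservoirs split as $g_1(x_1) + g_2(x_2) \le 0$, dualizing them with multipliers $\lambda \ge 0$ yields a separable dual function

\[ \theta(\lambda) = \min_{x_1}\,\bigl[c_1(x_1) + \lambda^\top g_1(x_1)\bigr] + \min_{x_2}\,\bigl[c_2(x_2) + \lambda^\top g_2(x_2)\bigr], \]

each minimization being one reservoir subproblem, with a subgradient update $\lambda^{k+1} = \bigl[\lambda^k + s_k\,(g_1(x_1^k) + g_2(x_2^k))\bigr]_+$.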

17:15
Evaluation of the Impact of the Power Production Function Approximation on Hydropower Maintenance Scheduling***Cancelled***
PRESENTER: Miguel Anjos

ABSTRACT. Maintenance planning for hydropower plants is a crucial problem. In this paper, we evaluate the impact of the Hydropower Production Function formulation on maintenance scheduling. Based on an existing model for Generator Maintenance Scheduling that uses a convex hull approximation to represent the Hydropower Production Function, we develop two additional models. The first uses piecewise linear approximations, and the second consists in using a polynomial function fitted on real data. We compare these three approximations using a real-world hydroelectric complex and two sets of data, one generated using the dynamic programming algorithm that simulates the power output, and the other produced by taking the convex envelope of the first set. The results show that changing the approximation shifts the scheduling of some maintenance tasks, and hence that maintenance schedules may be noticeably impacted by the approximation of the production function. For the hydroelectric complex we consider, the differences between schedules can represent up to 8.3 GWh per month.

16:15-17:45 Session 6D: Global Optimization II
Location: A1.134
16:15
Bayesian Multi-objective Optimization with Noisy Evaluations using the Knowledge Gradient
PRESENTER: Bruno Barracosa

ABSTRACT. We consider the problem of multi-objective optimization in the case where each objective is a stochastic black box that provides noisy evaluation results. More precisely, let f_1, ..., f_q be q real-valued objective functions defined on a search domain X ⊂ ℝ^d, and assume that, for each x ∈ X, we can observe a noisy version of the objectives: Z_1 = f_1(x) + ε_1, ..., Z_q = f_q(x) + ε_q, where the ε_i's are zero-mean random variables. Our objective is to estimate the Pareto-optimal solutions of the problem: min f_1, ..., f_q. We adopt a Bayesian optimization approach, which is a classical approach when the affordable number of evaluations is severely limited. In essence, Bayesian optimization consists in choosing a probabilistic model for the outputs Z_i and defining a sampling criterion to select evaluation points in the search domain X. Here, we propose to discuss the extension of the Knowledge Gradient approach of Frazier, Powell and Dayanik (INFORMS J. Comput., 21(4):599–613, 2009) to multi-objective problems with noisy evaluations. For instance, such an extension has been recently proposed by Astudillo and Frazier.
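A toy sketch of the modeling step (names, kernels and test functions are ours, purely illustrative): fit one Gaussian process per noisy objective, with a WhiteKernel term estimating the noise level of the ε_i's:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, WhiteKernel

    # Toy setup: q = 2 noisy objectives on X subset of R^2.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(30, 2))
    f1 = lambda x: np.sum((x - 0.25) ** 2, axis=1)
    f2 = lambda x: np.sum((x - 0.75) ** 2, axis=1)
    Z1 = f1(X) + 0.05 * rng.standard_normal(len(X))   # noisy evaluations
    Z2 = f2(X) + 0.05 * rng.standard_normal(len(X))

    # One GP per objective; WhiteKernel estimates the noise variance.
    kernel = Matern(nu=2.5) + WhiteKernel()
    gps = [GaussianProcessRegressor(kernel=kernel).fit(X, Z) for Z in (Z1, Z2)]
    mean1, std1 = gps[0].predict(X, return_std=True)

A sampling criterion such as the Knowledge Gradient is then maximized over these posterior models to select the next evaluation point.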

16:45
New Containers Scheduling Strategy based on a Multi-Criteria Decision Algorithm
PRESENTER: Tarek Menouer

ABSTRACT. This paper presents a new scheduling strategy proposed to optimize the scheduling of several containers submitted online by users in the cloud. In the literature, several frameworks have been proposed to schedule containers in the cloud, such as Google Kubernetes, Docker SwarmKit and Apache Mesos. However, the majority of scheduling frameworks use strategies based on only one criterion. For example, Docker SwarmKit (version v1.12) is based on the Spread strategy, which executes a container on the node having the least number of running containers. The Bin Packing strategy is also used as a scheduling strategy in the previous Docker Swarm version and in other studies; its principle consists in choosing the most compacted node with respect to resources, in order to reduce the total number of used nodes. In our context, the submitted containers and the state of the nodes are described by multiple criteria. The novelty of our scheduling strategy is to choose the node that executes a container by combining the Spread and Bin Packing principles using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria algorithm. The experiments demonstrate the potential of our strategy in terms of performance gains.
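For concreteness, a generic TOPSIS ranking sketch (not the authors' scheduler): rows of the decision matrix are candidate nodes, columns are criteria such as running-container count (a cost, Spread-like) or resource compactness (a benefit, Bin-Packing-like):

    import numpy as np

    def topsis(D, w, benefit):
        """Rank alternatives (rows of D) over criteria (columns).

        w: criterion weights; benefit: boolean array, True where larger
        is better. Returns alternative indices, best first."""
        R = D / np.linalg.norm(D, axis=0)        # vector normalization
        V = R * w                                # weighted normalized matrix
        best = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        closeness = d_worst / (d_best + d_worst)
        return np.argsort(-closeness)

In such a sketch, topsis(D, w, benefit)[0] would be the node selected to host the incoming container.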

17:15
Bayesian Optimization in Reduced Eigenbases
PRESENTER: David Gaudrie

ABSTRACT. Parametric shape optimization aims at minimizing f(x), where x is a large vector of CAD parameters, which makes the optimization difficult, especially when surrogate-based approaches are mandatory. Most often, the set of considered CAD shapes resides in a manifold of lower dimension, where it is preferable to perform the optimization. We uncover this manifold through the PCA of a dataset of designs mapped to a high-dimensional (D) shape space: a few (δ) axes (eigenshapes) suffice to describe the shapes accurately. A Gaussian Process (GP) is fitted to the principal components. To emphasize the δ most important axes, selected through a regularized maximum likelihood, without entirely neglecting the D-δ remaining ones, we consider the sum of a main-effect GP on the δ-dimensional input space and a coarse, isotropic GP on the (D-δ)-dimensional complement, which accounts for the effects of the less relevant eigenshapes. This GP requires only δ+3 hyperparameters and is tractable even with few observations. A redefinition of the Expected Improvement takes advantage of the space reduction to carry out the maximization in the smaller space of important eigenshapes, completed by a cheap maximization w.r.t. the inactive ones through an embedding strategy. Its pre-image is the next evaluated design.
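A minimal sketch of the dimension-reduction step, assuming each design has already been mapped to a D-dimensional shape vector (the file name and the 99% variance cutoff are placeholders; the talk selects δ via a regularized maximum likelihood instead):

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical shape data: one row per design in the D-dim shape space.
    Phi = np.load("shapes.npy")          # shape (n_designs, D); placeholder

    pca = PCA().fit(Phi)
    # Keep the delta leading eigenshapes explaining 99% of the variance.
    cum = np.cumsum(pca.explained_variance_ratio_)
    delta = int(np.searchsorted(cum, 0.99)) + 1
    alpha = pca.transform(Phi)[:, :delta]   # principal components: GP inputs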

16:15-17:45 Session 6E: Control, Games and Variational Inequalities
Location: A1.133
16:15
Optimal Resource Allocation in Micro-organisms: the Dynamics of Photoacclimation in Microalgae as a Case Study (Oracle Project)**Cancelled**
PRESENTER: Francis Mairet

ABSTRACT. Given their key role in almost all ecosystems and in several industries, understanding and predicting microorganism growth is of utmost importance. In compliance with evolutionary principles, coarse-grained models of microbial growth can be used to determine optimal resource allocation schemes under dynamic environmental conditions. The key idea is to represent microorganism growth as an optimal control problem (OCP). In a previous work, using Pontryagin's maximum principle and numerical simulations, we determined the optimal strategy (involving a chattering arc) for a bacterial population facing a nutrient up-shift. This approach must now be validated with experiments. But fitting these models to data results in a complex bi-level optimization problem. As a toy example, we present here a model describing how microalgae acclimate to a change in light intensity. After the analytical investigation of the OCP by Pontryagin's maximum principle, we fit the model to experimental data. To do so, we use a classical parameter identification routine, calling at each iteration the bocop solver to solve the OCP. As future work in the framework of the Oracle project, we aim to develop numerical tools to tackle such a complex problem more efficiently.

16:45
Quasi-Variational Inequality problems over product sets and applications to GNEPs
PRESENTER: David Salas

ABSTRACT. Quasi-variational inequalities are variational inequalities in which the constraint map depends on the current point. Due to this characteristic, specific proofs have been developed to establish adapted existence results. Semicontinuity and generalized monotonicity are assumed, and many efforts have been made in recent decades to use the weakest possible concepts. In the case of quasi-variational inequalities defined on a product of spaces, the existence statements in the literature require pseudomonotonicity of the operator, a hypothesis that is too strong for many applications, in particular in economics. Our aim in this work is to develop specific existence results for Generalized Nash Equilibrium Problems (GNEPs). These results are based on recent developments for quasi-variational inequalities defined on products of spaces and allow us to tackle GNEPs involving a bilevel structure, that is, Multi-Leader-Follower Games. Applications to an energy management problem will be considered, thanks to the concept of Multi-Leader-Disjoint-Follower game.
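For reference, the quasi-variational inequality problem asks for $x^* \in K(x^*)$ such that

\[ \langle T(x^*),\, y - x^* \rangle \ge 0 \quad \text{for all } y \in K(x^*); \]

in the GNEP setting the constraint map is a product, $K(x) = \prod_i K_i(x_{-i})$, and $T$ stacks the partial gradients of the players' cost functions.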

17:15
Solving Perfect Information Mean Payoff Zero-sum Stochastic Games by Variance Reduced Deflated Value Iteration
PRESENTER: Omar Saadi

ABSTRACT. We introduce a deflated version of value iteration, which allows one to solve mean-payoff problems, including both Markov decision processes and perfect information zero-sum stochastic games. This method requires the existence of a distinguished state which is accessible from all initial states and under all policies; it differs from the classical relative value iteration algorithm for mean-payoff problems in that it does not need any primitivity or geometric ergodicity condition. Our method is based on a reduction from the mean-payoff problem to a discounted problem by a Doob h-transform, combined with a deflation technique and non-linear spectral theory results (Collatz-Wielandt characterization of the eigenvalue). In this way, we provide a new method, Deflated Value Iteration, that allows complexity results to be extended from the discounted to the mean-payoff case. In particular, Sidford, Wang, Wu and Ye recently developed an algorithm combining value iteration with variance reduction techniques to solve discounted Markov decision processes in sublinear time when the discount factor is fixed. We combine deflated value iteration with variance reduction techniques to obtain sublinear algorithms for mean-payoff stochastic games in which the first hitting times of a distinguished state are bounded a priori.
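As a point of comparison, here is the classical relative value iteration baseline mentioned in the abstract, for a finite mean-payoff MDP; subtracting the value at a distinguished state s0 at each sweep is the simplest form of the deflation idea, while the talk's algorithm refines it via a Doob h-transform and removes the ergodicity assumptions (a sketch only, not the authors' method):

    import numpy as np

    def relative_value_iteration(P, r, s0=0, n_iter=1000):
        """P[a]: transition matrix of action a; r[a]: its reward vector.
        Subtracting v[s0] keeps the iterates bounded; v[s0] before the
        subtraction estimates the mean payoff g."""
        v = np.zeros(r[0].size)
        g = 0.0
        for _ in range(n_iter):
            v = np.max([r[a] + P[a] @ v for a in range(len(P))], axis=0)
            g = v[s0]        # current estimate of the mean payoff
            v = v - g        # 'deflation' at the distinguished state
        return g, v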

16:15-17:45 Session 6F: Bilevel Programming and Applications, part II

Invited session, organized by Luce Brotcorne and Miguel Anjos

Location: A1.128
16:15
There's No Free Lunch: On the Hardness of Choosing a Correct Big-M in Bilevel Optimization
PRESENTER: Fränk Plein

ABSTRACT. One of the most frequently used approaches to solve linear bilevel optimization problems consists in replacing the lower-level problem with its Karush-Kuhn-Tucker (KKT) conditions and reformulating the KKT complementarity conditions using techniques from mixed-integer linear optimization. The latter step requires determining a big-M constant in order to bound the lower level's dual feasible set such that no bilevel-optimal solution is cut off. In practice, heuristics are often used to find a big-M, although it is known that these approaches may fail. In this paper, we consider the hardness of two proxies for the above-mentioned concept of a bilevel-correct big-M. First, we prove that verifying that a given big-M does not cut off any feasible vertex of the lower level's dual polyhedron cannot be done in polynomial time unless P = NP. Second, we show that verifying that a given big-M does not cut off any optimal point of the lower level's dual problem (for any point in the projection of the high-point relaxation onto the leader's decision space) is as hard as solving the original bilevel problem.

16:45
Near-Optimal Robust Bilevel Optimization

ABSTRACT. Bilevel optimization studies problems where the optimal response to a second mathematical optimization problem is integrated into the constraints. Such a structure arises in a variety of decision-making problems in areas such as market equilibria, policy design and product pricing. We introduce near-optimal robustness for bilevel problems, protecting the upper-level decision-maker from bounded rationality at the lower level, and we show that it is a restriction of the corresponding pessimistic bilevel problem. Essential properties are derived in generic and specific settings. This model finds a corresponding and intuitive interpretation in various situations cast as bilevel optimization problems. We develop a duality-based solution method for cases where the lower level is convex, leveraging methodology from the robust and bilevel literature. The resulting models are tested numerically using different solvers and formulations, demonstrating a successful implementation of the near-optimal robust bilevel problem.

16:15-17:45 Session 6G: Sparse Optimization and Machine Learning
Location: A1.122
16:15
Handling correlated and repeated measurements with the smoothed Multivariate square-root Lasso
PRESENTER: Quentin Bertrand

ABSTRACT. Sparsity promoting norms are frequently used in high dimensional regression. A limitation of such Lasso-type estimators is that the optimal regularization parameter depends on the unknown noise level. Estimators such as the concomitant Lasso address this dependence by jointly estimating the noise level and the regression coefficients. Additionally, in many applications, the data is obtained by averaging multiple measurements: this reduces the noise variance, but it dramatically reduces sample sizes and prevents refined noise modeling. In this work, we propose a concomitant estimator that can cope with complex noise structure by using non-averaged measurements. The resulting optimization problem is convex and amenable, thanks to smoothing theory, to state-of-the-art optimization techniques that leverage the sparsity of the solutions. Practical benefits are demonstrated on toy datasets, realistic simulated data and real neuroimaging data.
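For orientation, one standard smoothed concomitant formulation in the multi-task case reads (up to normalization conventions, and before the non-averaged noise modeling introduced in the talk)

\[ \min_{B,\; \sigma \ge \underline{\sigma}} \; \frac{\|Y - XB\|_F^2}{2 n \sigma} + \frac{\sigma}{2} + \lambda \|B\|_{2,1}, \]

where the noise level $\sigma$ is estimated jointly with the coefficients $B$, the constraint $\sigma \ge \underline{\sigma}$ is the smoothing, and the $\ell_{2,1}$ norm promotes row-sparsity.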

16:45
The Geometry of Sparse Analysis Regularization
PRESENTER: Samuel Vaiter

ABSTRACT. Analysis sparsity is a common prior in inverse problems and linear regression. We study the geometry of the solution set (a polyhedron) of analysis l1-regularization when it is not reduced to a singleton. Leveraging a fine analysis of the sublevel sets of the regularizer, we derive a connection between the sign pattern of a solution and the dimension of the smallest face containing it. Particularizing this result, we obtain, on the one hand, an algebraic test to check whether a solution is extreme (that is, an extreme point of the solution set); such solutions are of interest in the theory of representer theorems. On the other hand, we obtain the dimension of the solution set from the sign pattern of a maximal solution (that is, a solution in the relative interior of the solution set), which can be determined numerically. We provide numerical examples illustrating how to use these results.
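Throughout, the analysis l1-regularization problem can be written as

\[ \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \|D x\|_1, \]

with analysis operator $D$ (e.g. a finite-difference operator in the case of total variation); its solution set is the (possibly non-singleton) polyhedron studied in the talk.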

17:15
Variational Formulations for the l0 Pseudonorm and Applications to Sparse Optimization
PRESENTER: Michel De Lara

ABSTRACT. In sparse optimization problems, one looks for solutions that have few nonzero components. We consider problems where sparsity is exactly measured by the l0 pseudonorm. In this case, Fenchel conjugacy fails to provide relevant analysis.

We display a new conjugacy, induced by a so-called coupling Capra (constant along primal rays), which depends on a general source norm, and we show that it is suitable for sparse optimization.

We show that the Capra conjugate of the characteristic function of the level sets of the l0 pseudonorm is a new norm, the generalized top-$k$ norm. With this, we display a lower bound convex program for the original (nonconvex) exact sparse optimization problem, which is a convex minimization program over the unit ball of a so-called generalized $k$-support norm.

We show that the Capra biconjugate of the l0 pseudonorm is equal to the l0 pseudonorm when the source norm of the coupling Capra has certain properties, such as orthant-strict monotonicity or rotundity. From this, we deduce variational formulas for the l0 pseudonorm. With these novel expressions for the l0 pseudonorm, we provide reformulations of exact sparse optimization problems that contain convex parts, even if the original problem had none.
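For the reader's convenience: in the related literature, the Capra coupling induced by a source norm $\|\cdot\|$ takes the form (hence the name, constant along primal rays)

\[ c(x, y) = \frac{\langle x, y \rangle}{\|x\|} \;\text{ if } x \ne 0, \qquad c(0, y) = 0 . \]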

16:15-17:45 Session 6H: Complementarity problems and Variational problems
Location: A1.139
16:15
Polyhedral Newton-Min Algorithms for Complementarity Problems

ABSTRACT. The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires solving a linear system.

If the first iterate is far from a zero, it is difficult to force convergence, because a semismooth Newton direction may not be a descent direction of the associated least-squares merit function, unlike in the differentiable case. We explore this question in the particular case of a nonsmooth equational reformulation of the nonlinear complementarity problem using the minimum function. We propose a globally convergent algorithm using a modified definition of the semismooth Newton direction that makes it a descent direction of the least-squares merit function. Instead of requiring that the direction satisfy a linear system, it must be a feasible point of a convex polyhedron; hence, it can be computed in polynomial time. This polyhedron is defined by inequalities, often very few of them, obtained by linearizing the pairs of functions that have close negative values at the current iterate. [...] Global convergence to regular points is proved.
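Recall that the minimum-function reformulation used here is the componentwise identity

\[ 0 \le x \;\perp\; F(x) \ge 0 \quad\Longleftrightarrow\quad \min(x, F(x)) = 0, \]

so that the associated least-squares merit function is $\Theta(x) = \tfrac{1}{2}\|\min(x, F(x))\|_2^2$.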

16:45
A new strategy for solving nonlinear complementarity problems arising in thermodynamics of compositional multiphase mixtures
PRESENTER: Duc Thach Son Vu

ABSTRACT. In this work, we propose a new method to solve difficult nonlinear complementarity problems arising in realistic applications. Indeed, unified formulations using complementarity conditions have recently emerged as a promising way to handle the appearance and disappearance of phases in porous media multiphase compositional flows. From a mathematical point of view, after discretization this leads to systems combining algebraic equations and nonlinear complementarity conditions. Such systems give rise to major convergence difficulties for standard smooth or semismooth Newton-like methods. This led us to design a new approach inspired by interior point methods from optimization. We propose a technique that avoids any parameter management while ensuring good theoretical convergence results. We present extensive numerical tests and several comparisons to classical methods.
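The generic interior-point idea alluded to is to relax the complementarity conditions with a parameter $\mu > 0$,

\[ x_i > 0, \quad F_i(x) > 0, \quad x_i F_i(x) = \mu \quad (i = 1, \dots, n), \]

and to follow the resulting central path as $\mu \downarrow 0$; this is a schematic rendering of the classical mechanism, not the authors' exact formulation, whose contribution is precisely to avoid delicate management of such parameters.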

17:15
Error estimate for TV denoising

ABSTRACT. We consider the ROF denoising problem and discretize it on a grid of square pixels of size h. To do so, one can choose among several discretizations of the continuous total variation. It has been observed that the classical so-called isotropic total variation over-weights the 45-degree diagonal, leading to an exaggerated blur in that direction. We show that this blur is of order $h^{2/3}$. As a comparison, we re-establish an estimate of order $h$ for another choice of discrete total variation based on Raviart-Thomas finite elements.
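For reference, the ROF model and the isotropic discrete total variation at stake are

\[ \min_u \; \tfrac{1}{2}\|u - f\|_{L^2}^2 + \lambda\, \mathrm{TV}(u), \qquad \mathrm{TV}_{\mathrm{iso}}(u) = h \sum_{i,j} \sqrt{(D_x u)_{i,j}^2 + (D_y u)_{i,j}^2}, \]

with $D_x, D_y$ the finite differences between adjacent pixels; the $h^{2/3}$ estimate concerns this isotropic choice, while the order-$h$ estimate holds for the Raviart-Thomas-based discretization.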

16:15-17:15 Session 6I: Mixed integer programming
Location: A1.116
16:15
A Graph-based Heuristic for Variable Selection in Mixed Integer Linear Programming
PRESENTER: Marc Etheve

ABSTRACT. Branch and Bound (B&B) and its variants have become the most commonly used algorithms to solve Mixed Integer Linear Programming (MILP). State-of-the-art solvers rely on generic heuristics guiding, in particular, variable selection and node selection.

We propose a new branching heuristic based on a graph representation of a MILP. This heuristic can be seen as an approximation of Influence Maximization in the corresponding graph. Alternatively, it is equivalent to clustering variables by K-means in a low-dimensional space when considering a MILP as a scatter of points.

The proposed heuristic is tested on EDF's energy planning problems and seems to make structural branching decisions. However, it is not invariant to reformulation.
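A toy rendition of the K-means view mentioned above (a hypothetical construction of ours, not EDF's heuristic): embed each integer variable as its normalized constraint-matrix column, cluster these points, and branch on the variable closest to the centroid of the most populated cluster:

    import numpy as np
    from sklearn.cluster import KMeans

    def branching_candidate(A, int_vars, k=5, seed=0):
        """A: constraint matrix (m x n); int_vars: indices of the integer
        variables. Toy heuristic: cluster variable columns, branch on the
        variable nearest the centroid of the largest cluster."""
        int_vars = np.asarray(int_vars)
        cols = A[:, int_vars].T                      # one point per variable
        cols = cols / (np.linalg.norm(cols, axis=1, keepdims=True) + 1e-12)
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(cols)
        big = np.argmax(np.bincount(km.labels_))     # most populated cluster
        in_big = np.where(km.labels_ == big)[0]
        dists = np.linalg.norm(cols[in_big] - km.cluster_centers_[big], axis=1)
        return int_vars[in_big[np.argmin(dists)]]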

16:45
Mixed Integer Optimization Using Piecewise Linear Function Fitting and JULIA-based library

ABSTRACT. We present an efficient algorithm able to over-estimate, under-estimate or approximate any arbitrary univariate nonlinear differentiable function by a not necessarily continuous piecewise linear function that minimizes the number of linear segments within a guaranteed tolerance. The algorithm is based on the piecewise linear bounding method recently proposed by [Ngueveu 2018]. The two main methodological contributions of this paper are (i) a generalization of the approach to a larger class of tolerance types than the absolute and relative tolerances from Ngueveu, and (ii) a reformulation technique allowing any approximation problem of a convex or concave function with any tolerance type that preserves concavity to be reduced to fitting a piecewise linear function within a bounded corridor. The core of the proposed Julia library is an efficient implementation of such piecewise linear function fitting. The resulting software is used to solve certain classes of MINLP problems with linear constraints and a non-linear objective function. Computational results on a nonlinear function approximation benchmark [Rebennack and Krasko, 2019] and on network design problems with congestion [Paraskevopoulos et al., 2016] show that state-of-the-art MINLP solvers and other MILP-based solution methods from the literature are outperformed.
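A simple greedy sketch of the corridor-fitting core (illustrative, absolute tolerance only; the library's actual algorithm minimizes the segment count with guarantees for general tolerance types): extend each segment over sampled points as long as some line fits within [f - tol, f + tol], testing feasibility by a small LP:

    import numpy as np
    from scipy.optimize import linprog

    def segment_fits(x, lo, hi):
        """Is there a line a*x + b with lo <= a*x + b <= hi on all samples?"""
        m = len(x)
        A = np.vstack([np.column_stack([x, np.ones(m)]),      #  a*x+b <= hi
                       np.column_stack([-x, -np.ones(m)])])   # -(a*x+b) <= -lo
        b = np.concatenate([hi, -lo])
        res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b,
                      bounds=[(None, None), (None, None)])
        return res.status == 0

    def greedy_pwl(x, f, tol):
        """Greedily extend segments while a corridor-feasible line exists.
        Returns the (start, end) sample-index ranges of the segments."""
        lo, hi = f - tol, f + tol
        segments, i = [], 0
        while i < len(x):
            j = i + 1
            while j < len(x) and segment_fits(x[i:j + 1], lo[i:j + 1], hi[i:j + 1]):
                j += 1
            segments.append((i, j - 1))
            i = j
        return segments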

17:15-17:45 Session 7: Fuzzy function methods in optimization
Location: A1.116
17:15
Optimality of a multi-variable fuzzy function

ABSTRACT. This article introduces a new notion of differentiability for fuzzy functions of fuzzy variables. Specifically, we define first- and second-order derivatives of a fuzzy function $\tilde{f}: (F(\mathbb{R}))^{n} \to F(\mathbb{R})$, where $(F(\mathbb{R}))^{n}$ is the set of all vectors of fuzzy numbers. In the process, we analyse the algebra of derivatives of the considered fuzzy functions. Using the proposed differentiability notion, we prove a necessary and sufficient optimality condition for obtaining a non-dominated solution of a fuzzy optimisation problem. Several illustrative examples are given to support the introduced ideas.