MFCS 2023: 48TH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE
PROGRAM FOR THURSDAY, AUGUST 31ST

08:30-09:00 Coffee Break
09:00-10:00 Session 13

Invited Talk

Location: Amphi F
09:00
Online Algorithms with Predictions
10:00-10:30 Coffee Break
10:30-12:10 Session 14A: Operational Search
Location: Amphi F
10:30
Competitive Search in the Line and the Star with Predictions

ABSTRACT. We study the classic problem of searching for a hidden target in the line and the $m$-ray star, in a setting in which the searcher has some prediction on the hider's position. We first focus on the main metric for comparing search strategies under predictions; namely, we give positive and negative results on the consistency-robustness tradeoff, where the performance of the strategy is evaluated at extreme situations in which the prediction is either error-free, or adversarially generated, respectively. For the line, we show tight bounds concerning this tradeoff, under the untrusted advice model, in which the prediction is in the form of a $k$-bit string which encodes the responses to $k$ binary queries. For the star, we give tight, and near-tight tradeoffs in the {\em positional} and the {\em directional} models, in which the prediction is related to the position of the target within the star, and to the ray on which the target hides, respectively. Last, for all three prediction models, we show how to generalize our study to a setting in which the performance of the strategy is evaluated as a function of the searcher's desired tolerance to prediction errors, both in terms of positive and inapproximability results.
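
A brief note on classical baselines (standard facts about prediction-free search, not part of the abstract above): without any prediction, the optimal deterministic competitive ratio is known to be

\[ 9 \ \text{for the line}, \qquad 1 + 2\,\frac{m^m}{(m-1)^{m-1}} \ \text{for the } m\text{-ray star}, \]

the former being the special case $m=2$ of the latter; these are the natural reference points for the robustness side of such tradeoffs.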

10:55
The Covering Canadian Traveller Problem Revisited

ABSTRACT. In this paper, we consider the k-Covering Canadian Traveller Problem (k-CCTP), which can be seen as a variant of the Travelling Salesperson Problem. The goal of k-CCTP is finding the shortest tour for a traveller to visit a set of locations in a given graph and return to the origin. Crucially, unknown to the traveller, up to k edges of the graph are blocked and the traveller only discovers blocked edges online at one of their respective endpoints. The currently best known upper bound for k-CCTP is O(√k) which was shown in [Huang and Liao, ISAAC ’12]. We improve this polynomial bound to a logarithmic one by presenting a deterministic O(log k)-competitive algorithm that runs in polynomial time. Further, we demonstrate the tightness of our analysis by giving a lower bound instance for our algorithm.

11:20
Graph Connectivity with Noisy Queries

ABSTRACT. Graph connectivity is a fundamental combinatorial optimization problem that arises in many practical applications, where usually a spanning subgraph of a network is used for its operation. However, in the real world, links may fail unexpectedly, rendering the network non-operational, while checking whether a link is damaged is costly and possibly erroneous. After an event that has damaged an arbitrary subset of the edges, the network operator must find a spanning tree of the network using non-damaged edges by making as few checks as possible.

Motivated by such questions, we study the problem of finding a spanning tree in a network, when we only have access to noisy queries of the form “Does edge e exist?”. We design efficient algorithms, even when edges fail adversarially, for all possible error regimes: 2-sided error (where any answer might be erroneous), false positives (where “no” answers are always correct) and false negatives (where “yes” answers are always correct). In the first two regimes we provide efficient algorithms and give matching lower bounds for general graphs. In the false-negative case we design efficient algorithms for large interesting families of graphs (e.g. bounded treewidth, sparse graphs). Using these results, we provide tight algorithms for the practically useful family of planar graphs in all error regimes.
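
The abstract above describes the query model rather than a specific procedure. Purely as an illustration of the standard amplification baseline (assuming independent, non-adversarial noise, and not necessarily the technique of the paper; all names below are illustrative), a single noisy edge query can be boosted by repetition and majority vote:

```python
import random
from collections import Counter

def noisy_edge_query(edge, surviving_edges, p=0.3):
    """Hypothetical 2-sided noisy oracle for "Does edge e exist?":
    the true answer is flipped independently with probability p."""
    truth = edge in surviving_edges
    return truth if random.random() >= p else (not truth)

def amplified_query(edge, surviving_edges, p=0.3, repetitions=31):
    """Standard amplification: repeat the noisy query an odd number of
    times and take the majority vote; with independent errors the
    failure probability decays exponentially in the number of repetitions."""
    votes = Counter(noisy_edge_query(edge, surviving_edges, p)
                    for _ in range(repetitions))
    return votes[True] > votes[False]

if __name__ == "__main__":
    surviving = {("a", "b"), ("b", "c")}            # edges that did not fail
    print(amplified_query(("a", "b"), surviving))   # True with high probability
    print(amplified_query(("a", "c"), surviving))   # False with high probability
```

Per the abstract, the interesting question is how to find a spanning tree with far fewer checks than such naive per-edge amplification over all edges.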

11:45
Exact and approximation algorithms for routing a convoy through a graph
PRESENTER: Martijn van Ee

ABSTRACT. We study routing problems of a convoy in a graph, generalizing the shortest path problem (SPP), the travelling salesperson problem (TSP), and the Chinese postman problem (CPP), which are all well-studied in the classical (non-convoy) setting. We assume that each edge in the graph has a length and a speed at which it can be traversed, and that our convoy has a given length. While the convoy moves through the graph, parts of it can be located on different edges. For safety requirements, at all times the whole convoy needs to travel at the same speed, which is dictated by the slowest edge on which a part of the convoy is currently located. For Convoy-SPP, we give a strongly polynomial time exact algorithm. For Convoy-TSP, we provide an O(log n)-approximation algorithm and an O(1)-approximation algorithm for trees. Both results carry over to Convoy-CPP which, maybe surprisingly, we prove to be NP-hard in the convoy setting. This contrasts with the non-convoy setting, in which the problem is polynomial-time solvable.

10:30-12:10 Session 14B: Cuts and Graphs
Location: Amphi G
10:30
Dichotomies for Maximum Matching Cut: H-Freeness, Bounded Diameter, Bounded Radius
PRESENTER: Felicia Lucke

ABSTRACT. The (Perfect) Matching Cut problem is to decide if a graph G has a (perfect) matching cut, i.e., a (perfect) matching that is also an edge cut of G. Both Matching Cut and Perfect Matching Cut are known to be NP-complete, leading to many complexity results for both problems on special graph classes. A perfect matching cut is also a matching cut with a maximum number of edges. To increase our understanding of the relationship between the two problems, we introduce the Maximum Matching Cut problem. This problem is to determine a largest matching cut in a graph. We generalize and unify known polynomial-time algorithms for Matching Cut and Perfect Matching Cut restricted to graphs of diameter at most 2 and to $(P_6 + sP_2)$-free graphs. We also show that the complexity of Maximum Matching Cut differs from the complexities of Matching Cut and Perfect Matching Cut by proving NP-hardness of Maximum Matching Cut for $2P_3$-free quadrangulated graphs of diameter 3 and radius 2 and for subcubic line graphs of triangle-free graphs. In this way, we obtain full dichotomies of Maximum Matching Cut for graphs of bounded diameter, graphs of bounded radius and H-free graphs.

10:55
MaxCut Above Guarantee
PRESENTER: Ivan Bliznets

ABSTRACT. In this paper, we study the computational complexity of the Maximum Cut problem parameterized above guarantee. Our main result provides a linear kernel for the Maximum Cut problem in connected graphs parameterized above the spanning tree bound. This kernel significantly improves the previous $O(k^5)$ kernel given by Madathil, Saurabh, and Zehavi [ToCS 2020]. We also provide subexponential running time algorithms for this problem in special classes of graphs: chordal, split, and co-bipartite. We complete the picture with lower bounds under the Exponential Time Hypothesis (ETH). Moreover, we initiate a study of the Maximum Cut problem above the $\frac{2}{3}|E|$ lower bound in tripartite graphs.
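
A standard averaging argument (general background, not taken from the paper) explains why $\frac{2}{3}|E|$ is the natural lower bound in tripartite graphs: every edge joins two of the three parts $V_1, V_2, V_3$, so it is cut by exactly two of the three bipartitions that place a single part on one side, and hence the best of these three cuts satisfies

\[ \max_{i \in \{1,2,3\}} \big|E(V_i, V \setminus V_i)\big| \;\ge\; \frac{1}{3} \sum_{i=1}^{3} \big|E(V_i, V \setminus V_i)\big| \;=\; \frac{2}{3}\,|E|. \]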

11:20
Finding a Highly Connected Steiner Subgraph and its Applications

ABSTRACT. Given a (connected) undirected graph $G$, a set $X \subseteq V(G)$ and integers $k$ and $p$, the STEINER SUBGRAPH EXTENSION problem asks if there exists a set $S \supseteq X$ of at most $k$ vertices such that $G[S]$ is a $p$-edge-connected subgraph. This problem is a natural generalization of the well-studied STEINER TREE problem (set $p=1$ and $X$ to be the terminals). In this paper, we initiate the study of STEINER SUBGRAPH EXTENSION from the perspective of parameterized complexity and give a fixed-parameter tractable (FPT) algorithm parameterized by $k$ and $p$ on graphs of bounded degeneracy. Besides being an independent advance on the parameterized complexity of network design problems, our result has some natural applications. In particular, we use our main result to obtain new single-exponential FPT algorithms for several vertex-deletion problems studied in the literature, where the goal is to delete a smallest set of vertices such that: (i) the resulting graph belongs to a specified hereditary graph class, and (ii) the deleted set of vertices induces a $p$-edge-connected subgraph of the original graph.

12:10-14:00 Lunch Break
14:00-15:15 Session 15A: Online Algorithms
Location: Amphi F
14:00
The Online Simple Knapsack Problem with Reservation and Removability
PRESENTER: Matthias Gehnen

ABSTRACT. In the online simple knapsack problem, a knapsack of unit size 1 is given and an algorithm is tasked to fill it using a set of items that are revealed one after another. Each item must be accepted or rejected at the time it is presented, and these decisions are irrevocable. No prior knowledge about the set and sequence of items is given. The goal is then to maximize the sum of the sizes of all packed items compared to an optimal packing of all items of the sequence.

In this paper, we combine two existing variants of the problem that each extend the range of possible actions for a newly presented item by a new option. The first is removability, in which an item that was previously packed into the knapsack may be irrevocably discarded at any later point. The second is reservations, which allow the algorithm to delay the decision on accepting or rejecting a new item indefinitely, for a fee proportional to the size of the given item.

If both removability and reservations are permitted, we show that the competitive ratio of the online simple knapsack problem rises depending on the relative reservation costs. As soon as any nonzero fee has to be paid for a reservation, no online algorithm can be better than 1.5-competitive. With rising reservation costs, this competitive ratio increases up to the golden ratio ($\phi\approx 1.618$), which is reached at relative reservation costs of $1-\frac{\sqrt{5}}{3}\approx 0.254$. We provide matching upper and lower bounds for relative reservation costs up to this value. From this point onward, the tight bound by Iwama and Taketomi for the removable knapsack problem is the best possible competitive ratio, without using any reservations.
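
For reference, the two constants mentioned above evaluate to

\[ \phi = \frac{1+\sqrt{5}}{2} \approx 1.618, \qquad 1 - \frac{\sqrt{5}}{3} \approx 0.254. \]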

14:25
Polynomial-Delay Enumeration of Large Maximal Common Independent Sets in Two Matroids
PRESENTER: Kazuhiro Kurita

ABSTRACT. Finding a \emph{maximum} cardinality common independent set in two matroids (also known as \textsc{Matroid Intersection}) is a classical combinatorial optimization problem, which generalizes several well-known problems, such as finding a maximum bipartite matching, a maximum colorful forest, and an arborescence in directed graphs. Enumerating all \emph{maximal} common independent sets in two (or more) matroids is a classical enumeration problem. In this paper, we address an ``intersection'' of these problems: Given two matroids and a threshold $\tau$, the goal is to enumerate all maximal common independent sets in the matroids with cardinality at least $\tau$. We show that this problem can be solved in polynomial delay and polynomial space. We also discuss how to enumerate all maximal common independent sets of two matroids in non-decreasing order of their cardinalities.

14:50
Rényi-Ulam Games and Online Computation with Imperfect Advice

ABSTRACT. We study the nascent setting of online computation with imperfect advice, in which the online algorithm is enhanced by some prediction encoded in the form of an imperfect, possibly erroneous binary string. The algorithm is oblivious to the advice error, but defines a desired tolerance, namely an upper bound on the number of erroneous advice bits it can tolerate. This is a model that generalizes the Pareto-based advice model, in which the performance of the algorithm is only evaluated at the extreme values of error (namely, when the advice either has no errors or is generated adversarially). It also subsumes the model in which the algorithm elicits a prediction on the online sequence via imperfect responses to a number of binary queries.

In this work, we establish connections between games with a lying responder, also known as Rényi-Ulam games, and the design and analysis of online algorithms with imperfect advice. Specifically, we demonstrate how to obtain upper and lower bounds on the competitive ratio for important online problems such as time-series search, online bidding, and fractional knapsack. Our techniques provide the first lower bounds for online problems in this model. We also highlight and exploit connections between competitive analysis with imperfect advice and fault-tolerance in multiprocessor systems. Last, we show how to waive the dependence on the tolerance parameter, by means of resource augmentation and robustification.
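
Background on the lying-responder setting (a classical fact about Rényi-Ulam games, not a claim of this paper): if the responder may lie at most $k$ times, Berlekamp's volume (sphere-packing) bound states that $q$ yes/no questions can identify an element of an $n$-element search space only if

\[ 2^{q} \;\ge\; n \sum_{i=0}^{k} \binom{q}{i}, \]

and this bound is known to be essentially tight for fixed $k$.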

14:00-15:15 Session 15B: Verification 2 and Computational Complexity
Location: Amphi G
14:00
Deciding Predicate Logical Theories of Real-Valued Functions

ABSTRACT. The notion of a real-valued function is central to mathematics, computer science, and many other scientific fields. Despite this importance, there are hardly any positive results on decision procedures for predicate logical theories that reason about real-valued functions. This paper defines a first-order predicate language for reasoning about multi-dimensional smooth real functions and their derivatives, and demonstrates that, despite the obvious undecidability barriers, certain positive decidability results for such a language are indeed possible.

14:25
Checking Presence Reachability Properties on Parameterized Shared-Memory Systems

ABSTRACT. We consider the verification of distributed systems composed of an arbitrary number of asynchronous processes. Processes are identical finite-state machines communicating by reading from and writing to a shared memory. Beyond the standard model with finitely many registers, we tackle round-based shared-memory systems with fresh registers at each round. In the latter model, both the number of processes and the number of registers are unbounded, making verification particularly challenging. The properties studied are generic presence reachability objectives that subsume classical questions such as safety or synchronization by expressing the presence or absence of processes in some states. In the more general round-based setting, we establish that the parameterized verification of presence reachability properties is PSPACE-complete. Moreover, for the roundless model with finitely many registers, we prove that the complexity drops down to NP-complete and we provide several natural restrictions that make the problem solvable in polynomial time.

14:50
Query Complexity of Search Problems
PRESENTER: Yogesh Dahiya

ABSTRACT. We relate various complexity measures like sensitivity, block sensitivity, and certificate complexity for multi-output functions to the query complexities of such functions. Using these relations, we provide the following improvements upon the known relationship between pseudo-deterministic and deterministic query complexity for total search problems: first, we show that deterministic query complexity is at most the third power of its pseudo-deterministic query complexity; previously, a fourth-power relation was shown by Goldreich, Goldwasser and Ron (ITCS'13). Second, we improve the known separation between pseudo-deterministic and randomised decision tree size for total search problems in two ways: (a) we exhibit an $\exp(\widetilde{\Omega}(n^{1/4}))$ separation for the SearchCNF relation for random $k$-CNFs; this seems to be the first exponential lower bound on the pseudo-deterministic size complexity of SearchCNF associated with random $k$-CNFs; (b) we exhibit an $\exp(\Omega(n))$ separation for the Approximate Hamming Weight relation; the previous best known separation for any relation was $\exp(\Omega(n^{1/2}))$. We also separate pseudo-determinism from randomness in AND and AND-OR decision trees, and determinism from pseudo-determinism in PARITY decision trees. For a hypercube colouring problem that was introduced by Goldwasser, Impagliazzo, Pitassi and Santhanam (CCC'21) to analyze the pseudo-deterministic complexity of a complete problem in TFNP-dt, we prove that either the \emph{monotone} block sensitivity or the \emph{anti-monotone} block sensitivity is $\Omega(n^{1/3})$; previously, Goldwasser, Impagliazzo, Pitassi and Santhanam showed an $\Omega(n^{1/2})$ bound for \emph{general} block sensitivity.

15:15-15:45 Coffee Break
15:45-17:00 Session 16A: Circuit Complexity
Location: Amphi F
15:45
Lower bounds for Choiceless Polynomial Time via Symmetric XOR-circuits

ABSTRACT. Choiceless Polynomial Time (CPT) is one of the few remaining candidate logics for capturing PTIME. In this paper, we make progress towards separating CPT from polynomial time by firstly establishing a connection between the expressive power of CPT and the existence of certain symmetric circuit families, and secondly, proving lower bounds against these circuits. We focus on the isomorphism problem of unordered Cai-Fürer-Immerman graphs (the CFI-query) as a potential candidate for separating CPT from P. Results by Dawar, Richerby and Rossman, and subsequently by Pakusa, Schalthöfer and Selman, show that the CFI-query is CPT-definable on linearly ordered and preordered base graphs with small colour classes. We define a class of CPT-algorithms that we call "CFI-symmetric algorithms", which generalises all the known ones, and show that such algorithms can only define the CFI-query on a given class of base graphs if there exists a family of symmetric XOR-circuits with certain properties. These properties include that the circuits have the same symmetries as the base graphs, are of polynomial size, and satisfy certain fan-in restrictions. Then we prove that such circuits with slightly strengthened requirements (i.e. stronger symmetry and fan-in and fan-out restrictions) do not exist for the n-dimensional hypercubes as base graphs. This almost separates the CFI-symmetric algorithms from polynomial time, up to the gap that remains between the circuits whose existence we can currently disprove and the circuits whose existence is necessary for the definability of the CFI-query by a CFI-symmetric algorithm.

16:10
Depth-3 Circuits for Inner Product
PRESENTER: Ziyi Guan

ABSTRACT. What is the $\Sigma_3^2$-circuit complexity (depth 3, bottom fan-in 2) of the $2n$-bit inner product function? The complexity is known to be exponential, $2^{\alpha_n n}$, for some $\alpha_n = \Omega(1)$. We show that the limiting constant $\alpha = \limsup \alpha_n$ satisfies

\[ 0.847\ldots \;\le\; \alpha \;\le\; 0.965\ldots \]

Determining $\alpha$ is one of the seemingly simplest open problems about depth-3 circuits. The question was recently raised by Golovnev, Kulikov, and Williams (ITCS 2021) and Frankl, Gryaznov, and Talebanfard (ITCS 2022), who observed that $\alpha \in [0.5, 1]$. To obtain our improved bounds, we analyse a covering LP that captures the $\Sigma_3^2$-complexity up to polynomial factors. In particular, our lower bound is proved by constructing a feasible solution to the dual LP.

16:35
Exponential Lower Bounds for Threshold Circuits of Sub-Linear Depth and Energy
PRESENTER: Kei Uchizawa

ABSTRACT. In this paper, we investigate the computational power of threshold circuits and other theoretical models of neural networks in terms of the following four complexity measures: size (the number of gates), depth, weight and energy. Here the energy complexity of a circuit is defined as the maximum number of gates outputting non-zero values, taken over all input assignments, and measures the sparsity of its computation.

As our main result, we prove that any threshold circuit $C$ of size $s$, depth $d$, energy $e$ and weight $w$ satisfies $\log (rk(M_C)) \le ed (\log s + \log w + \log n)$, where $rk(M_C)$ is the rank of the communication matrix $M_C$ of the $2n$-variable Boolean function that $C$ computes. Thus, such a threshold circuit $C$ can compute only Boolean functions whose communication matrix has rank bounded by a product of logarithmic factors of $s,w$ and linear factors of $d,e$. This implies an exponential lower bound on the size of even sublinear-depth threshold circuits if energy and weight are sufficiently small. For example, we obtain an exponential lower bound $s = 2^{\Omega(n^{1/3})}$ even for threshold circuits of depth $n^{1/3}$, energy $n^{1/3}$ and weight $2^{o(n^{1/3})}$. We also show that the inequality is tight up to a constant factor when the depth $d$ and energy $e$ satisfy $ed = o(n/ \log n)$.

For other models of neural networks, such as discretized ReLU circuits and discretized sigmoid circuits, we prove that a similar inequality holds for a discretized circuit $C$: $rk(M_C) = O(ed(\log s + \log w + \log n)^3)$. We obtain this inequality by showing that any discretized circuit can be simulated by a threshold circuit with a moderate increase in size, depth, energy and weight. Thus, if we consider the number of non-zero output values as a measure of the sparse activity of a neural network, our results suggest that larger depth helps neural networks to acquire sparse activity.
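
A short sanity check of how the inequality in the first paragraph yields the example bound stated there (assuming a target function on $2n$ variables whose communication matrix has rank $2^{\Omega(n)}$, such as equality): for depth and energy $d = e = n^{1/3}$ and weight $w = 2^{o(n^{1/3})}$,

\[ \Omega(n) \;\le\; \log\big(rk(M_C)\big) \;\le\; ed\,(\log s + \log w + \log n) \;=\; n^{2/3}\big(\log s + o(n^{1/3}) + \log n\big), \]

which forces $\log s = \Omega(n^{1/3})$, i.e. $s = 2^{\Omega(n^{1/3})}$.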

15:45-17:00 Session 16B: Games 2
Location: Amphi G
15:45
Entropic Risk for Turn-Based Stochastic Games
PRESENTER: Jakob Piribauer

ABSTRACT. Entropic risk (ERisk) is an established risk measure in finance, quantifying risk by an exponential re-weighting of rewards. We study ERisk for the first time in the context of turn-based stochastic games with the total reward objective. This gives rise to an objective function that demands the control of systems in a risk-averse manner. We show that the resulting games are determined and, in particular, admit optimal memoryless deterministic strategies. This contrasts with risk measures that have previously been considered in the special case of Markov decision processes and that require randomization and/or memory. We provide several results on the decidability and the computational complexity of the threshold problem, i.e. whether the optimal value of ERisk exceeds a given threshold. In the most general case, the problem is decidable subject to Schanuel's conjecture. If all inputs are rational, the resulting threshold problem can be solved using algebraic numbers, leading to decidability via a polynomial-time reduction to the existential theory of the reals. Further restrictions on the encoding of the input allow the solution of the threshold problem in NP ∩ coNP. Finally, an approximation algorithm for the optimal value of ERisk is provided.
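
For orientation, the standard definition of entropic risk (sign and scaling conventions vary in the literature, and the paper's exact convention may differ) applied to a random total reward $X$ with risk parameter $\gamma \neq 0$ is

\[ \mathrm{ERisk}_{\gamma}(X) \;=\; \frac{1}{\gamma} \log \mathbb{E}\big[e^{\gamma X}\big], \]

which re-weights rewards exponentially and recovers the expectation $\mathbb{E}[X]$ in the limit $\gamma \to 0$.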

16:10
Parameterized Analysis of the Cops and Robber Game

ABSTRACT. \textit{Pursuit-evasion games} have been intensively studied for several decades due to their numerous applications in artificial intelligence, robot motion planning, database theory, distributed computing, and algorithmic theory. \textsc{Cops and Robber} (\CR) is one of the most well-known pursuit-evasion games played on graphs, where multiple \textit{cops} pursue a single \textit{robber}. The aim is to compute the \textit{cop number} of a graph, $k$, which is the minimum number of cops that ensures the \textit{capture} of the robber.

From the viewpoint of parameterized complexity, \CR is W[2]-hard parameterized by $k$~[Fomin et al., TCS, 2010]. Thus, we study structural parameters of the input graph. We begin with the \textit{vertex cover number} ($\mathsf{vcn}$). First, we establish that $k \leq \frac{\mathsf{vcn}}{3}+1$. Second, we prove that \CR parameterized by $\mathsf{vcn}$ is \FPT by designing an exponential kernel. We complement this result by showing that it is unlikely for \CR parameterized by $\mathsf{vcn}$ to admit a polynomial compression. We extend our exponential kernels to the parameters \textit{cluster vertex deletion number} and \textit{deletion to stars number}, and design a linear vertex kernel for \textit{neighborhood diversity}. Additionally, we extend all of our results to several well-studied variations of \CR.

16:35
Approximating the Value of Energy-Parity Objectives in Simple Stochastic Games
PRESENTER: Mohan Dantam

ABSTRACT. We consider simple stochastic games G with energy-parity objectives, a combination of quantitative rewards with a qualitative parity condition. The Maximizer tries to avoid running out of energy while simultaneously satisfying a parity condition. We present an algorithm to approximate the value of a given configuration in 2-NEXPTIME. Moreover, epsilon-optimal strategies for either player require at most O(2-EXP(|G|)*log(1/epsilon)) memory modes.