MFCS 2023: 48TH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE
PROGRAM FOR TUESDAY, AUGUST 29TH

08:30-08:55 Coffee Break
08:55-09:00 Session 6: MFCS 2024

Presentation of the upcoming MFCS 2024 by Jerzy Marcinkowski, a member of the MFCS steering committee.

Location: Amphi F
09:00-10:00 Session 7

Invited Talk

Location: Amphi F
09:00
Algebraic Reasoning for (Un)Solvable Loops
10:00-10:30 Coffee Break
10:30-12:10 Session 8A: Algebraic Methods
Location: Amphi F
10:30
Realizing finitely presented groups as projective fundamental groups of SFTs

ABSTRACT. Subshifts are sets of colourings -- or tilings -- of the plane, defined by local constraints. Historically introduced as discretizations of continuous dynamical systems, they are also closely related to computability theory. In this article, we study a conjugacy invariant for subshifts, known as the projective fundamental group. It is defined via paths inside and between configurations. We show that any finitely presented group can be realized as the projective fundamental group of some SFT.

10:55
Multivariate to Bivariate Reduction for Noncommutative Polynomial Factorization
PRESENTER: Pushkar Joglekar

ABSTRACT. Based on a theorem of Bergman \cite{Cohnrelations}, we show that multivariate noncommutative polynomial factorization is deterministic polynomial-time reducible to the factorization of bivariate noncommutative polynomials. More precisely, we show the following: \begin{enumerate} \item In the white-box setting, given an $n$-variate noncommutative polynomial $f\in\mathbb{F}\langle x_1,\ldots,x_n\rangle$ over a field $\mathbb{F}$ (either a finite field or the rationals) as an arithmetic circuit (or algebraic branching program), computing a complete factorization of $f$ into irreducible factors is deterministic polynomial-time reducible to white-box factorization of a noncommutative bivariate polynomial $g\in\mathbb{F}\langle x,y\rangle$; the reduction transforms $f$ into a circuit for $g$ (resp.\ an ABP for $g$), and given a complete factorization of $g$ (namely, arithmetic circuits (resp.\ ABPs) for the irreducible factors of $g$), the reduction recovers a complete factorization of $f$ in polynomial time.

We also obtain a similar deterministic polynomial-time reduction in the black-box setting.

\item Additionally, we show that over the field of rationals, bivariate linear matrix factorization of $4\times 4$ matrices is at least as hard as factoring square-free integers. This indicates that reducing noncommutative polynomial factorization to linear matrix factorization (as done in \cite{AJ22}) is unlikely to succeed over the field of rationals even in the bivariate case. In contrast, multivariate linear matrix factorization for $3\times 3$ matrices over the rationals is in polynomial time. \end{enumerate}
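The reduction rests on an embedding of the free algebra on many variables into the free algebra on two. As an illustration only (the paper's precise substitution may differ), a classical embedding of this kind is

\[ \varphi \colon \mathbb{F}\langle x_1,\ldots,x_n\rangle \to \mathbb{F}\langle x,y\rangle, \qquad \varphi(x_i) = x y^i \quad (1 \le i \le n), \]

which is injective because the words $xy, xy^2, \ldots, xy^n$ freely generate a free subalgebra of $\mathbb{F}\langle x,y\rangle$; the role of the Bergman-type theorem cited above is, roughly, to guarantee that complete factorizations transfer back and forth along a suitable such substitution.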

11:20
A Weyl Criterion for Finite-State Dimension and Applications
PRESENTER: Subin Pulari

ABSTRACT. Finite-state dimension, introduced early in this century as a finite-state version of classical Hausdorff dimension, is a quantitative measure of the lower asymptotic {\it density of information} in an infinite sequence over a finite alphabet, as perceived by finite automata. Finite-state dimension is a robust concept that now has equivalent formulations in terms of finite-state gambling, lossless finite-state data compression, finite-state prediction, entropy rates, and automatic Kolmogorov complexity. The 1972 Schnorr-Stimm dichotomy theorem gave the first automata-theoretic characterization of normal sequences, which had been studied in analytic number theory since Borel defined them in 1909. This theorem implies, in present-day terminology, that a sequence (or a real number having this sequence as its base-$b$ expansion) is normal if and only if it has finite-state dimension 1. One of the most powerful classical tools for investigating normal numbers is Weyl's criterion of 1916, which characterizes normality in terms of exponential sums. Such sums are well-studied objects with many connections to other aspects of analytic number theory, and this has made the use of Weyl's criterion especially fruitful. This raises the question of whether Weyl's criterion can be generalized from finite-state dimension 1 to arbitrary finite-state dimensions, thereby making it a quantitative tool for studying data compression, prediction, etc.: \emph{can we characterize all compression ratios using exponential sums?}
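For reference, the classical criterion that the paper generalizes can be stated as follows: a real number $x$ is normal in base $b$ if and only if the sequence $(b^n x)_{n\ge 0}$ is equidistributed modulo 1, which by Weyl's criterion holds if and only if, for every nonzero integer $h$,

\[ \lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1} e^{2\pi i h b^n x} = 0. \]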

This paper does exactly this. We extend Weyl's criterion from a characterization of sequences with finite-state dimension 1 to a criterion that characterizes every finite-state dimension. This turns out {\em not} to be a routine generalization of the original Weyl criterion. Even though the exponential sums may diverge for non-normal numbers, finite-state dimension can be characterized in terms of the \emph{dimensions} of the \emph{subsequence limits} of the exponential sums. When the exponential sums do converge, they converge to the Fourier coefficients of a probability measure whose \emph{dimension} is precisely the finite-state dimension of the sequence.

This new and surprising connection helps us bring Fourier-analytic techniques to bear on proofs in finite-state dimension, yielding a new perspective. We demonstrate the utility of our criterion by substantially improving known results about the preservation of finite-state dimension under arithmetic operations, strictly generalizing the results of Aistleitner and of Doty, Lutz and Nandakumar. We use the method of exponential sums and our Weyl criterion to obtain the following new result: \emph{If $y$ is a number having finite-state strong dimension 0, then $\dim_{FS}(x+qy)=\dim_{FS}(x)$ and $\mathrm{Dim}_{FS}(x+qy)=\mathrm{Dim}_{FS}(x)$ for any $x \in \mathbb{R}$ and $q \in \mathbb{Q}$.} This generalization uses recent estimates obtained in the work of Hochman on the entropy of convolutions of probability measures.

11:45
On the complexity dichotomy for the satisfiability of systems of term equations over finite algebras

ABSTRACT. For a fixed finite algebra $\mathbf{A}$, we consider the decision problem SysTerm$(\mathbf{A})$: does a given system of term equations have a solution in $\mathbf{A}$? This is equivalent to a constraint satisfaction problem (CSP) for a relational structure whose relations are the graphs of the basic operations of $\mathbf{A}$. From the complexity dichotomy for CSP over fixed finite templates due to Bulatov (2017) and Zhuk (2017), it follows that SysTerm$(\mathbf{A})$ for a finite algebra $\mathbf{A}$ is in P if $\mathbf{A}$ has a (not necessarily idempotent) Taylor polymorphism and is NP-complete otherwise. More explicitly, we show that for a finite algebra $\mathbf{A}$ in a congruence modular variety (e.g., for a quasigroup), SysTerm$(\mathbf{A})$ is in P if the core of $\mathbf{A}$ is abelian and is NP-complete otherwise. Given $\mathbf{A}$ by the graphs of its basic operations, we show that this condition for tractability can be decided in quasi-polynomial time.
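To make the decision problem concrete, here is a minimal brute-force sketch in Python (exponential in the number of variables, in contrast to the tractable cases classified above; the encoding of terms and operations is our own illustration):

from itertools import product

def evaluate(term, assignment, ops):
    # A term is either a variable name (a string) or a tuple
    # (op_name, subterm_1, ..., subterm_k).
    if isinstance(term, str):
        return assignment[term]
    op = ops[term[0]]
    return op(*(evaluate(t, assignment, ops) for t in term[1:]))

def sys_term(equations, variables, universe, ops):
    # Decide whether a system of term equations has a solution in the
    # finite algebra (universe, ops) by exhaustive search.
    for values in product(universe, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(evaluate(lhs, assignment, ops) == evaluate(rhs, assignment, ops)
               for lhs, rhs in equations):
            return True
    return False

# Example: does x * y = y * x have a solution over Z_3 with multiplication mod 3?
ops = {"mul": lambda a, b: (a * b) % 3}
eqs = [(("mul", "x", "y"), ("mul", "y", "x"))]
print(sys_term(eqs, ["x", "y"], range(3), ops))  # True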

10:30-12:10 Session 8B: Parametrized Complexity
Location: Amphi G
10:30
On the Complexity of Computing Time Series Medians under the Move-Split-Merge Metric

ABSTRACT. We initiate a study of the complexity of MSM-Median, the problem of computing a median of a set of $k$ real-valued time series under the move-split-merge distance. This distance measure is based on three operations: moves, which may shift a data point in a time series; splits, which replace one data point in a time series by two consecutive data points of the same value; and merges, which replace two consecutive data points of equal value by a single data point of the same value. The cost of a move operation is the absolute difference between the value of the data point before and after the operation; the cost of a split or merge operation is a given constant $c$. Our main results are as follows. First, we show that MSM-Median is NP-hard and W[1]-hard with respect to $k$ for time series with at most three distinct values. Under the Exponential Time Hypothesis (ETH), our reduction implies that a previous dynamic programming algorithm with running time $|I|^{O(k)}$ [Holznigenkemper et al., Data Min.~Knowl.~Discov.~'23] is essentially optimal. Here, $|I|$ denotes the total input size. Second, we show that MSM-Median can be solved in $2^{O(d/c)}\cdot |I|^{O(1)}$ time, where $d$ is the total distance of the median to the input time series.
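For context, the pairwise move-split-merge distance underlying the median problem can be computed by the standard quadratic-time dynamic program of Stefan, Athitsos and Das; a sketch from memory (indexing conventions are ours, so treat the details as illustrative):

def msm_cost(new, a, b, c):
    # Cost of a split/merge step introducing value `new` next to
    # neighbouring values `a` and `b`.
    if min(a, b) <= new <= max(a, b):
        return c
    return c + min(abs(new - a), abs(new - b))

def msm_distance(x, y, c):
    # Move-split-merge distance between time series x and y with
    # split/merge constant c, in O(len(x) * len(y)) time.
    m, n = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(m + 1)]
    D[1][1] = abs(x[0] - y[0])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if i == 1 and j == 1:
                continue
            best = INF
            if i > 1 and j > 1:   # move
                best = min(best, D[i-1][j-1] + abs(x[i-1] - y[j-1]))
            if i > 1:             # split/merge on x
                best = min(best, D[i-1][j] + msm_cost(x[i-1], x[i-2], y[j-1], c))
            if j > 1:             # split/merge on y
                best = min(best, D[i][j-1] + msm_cost(y[j-1], x[i-1], y[j-2], c))
            D[i][j] = best
    return D[m][n]

print(msm_distance([1, 2, 3], [1, 3], 0.5))  # 1.5: move 2 to 3, then merge the equal points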

10:55
An FPT Algorithm for Spanning Trees with Few Branch Vertices Parameterized by Modular Width
PRESENTER: Luisa Gargano

ABSTRACT. The minimum branch vertices spanning tree problem consists in finding a spanning tree $T$ of an input graph $G$ having the minimum number of branch vertices, that is, vertices of degree at least three in $T$. This NP-hard problem has been widely studied in the literature and has many important applications in network design and optimization. Algorithmic and combinatorial aspects of the problem have been extensively studied, and its fixed-parameter tractability has recently been considered. In this paper we focus on modular width and show that the problem of finding a spanning tree with the minimum number of branch vertices is FPT with respect to this parameter.

11:20
Deterministic Constrained Multilinear Detection
PRESENTER: Cornelius Brand

ABSTRACT. We extend the algebraic techniques of Brand and Pratt (ICALP'21) for deterministic detection of $k$-multilinear monomials in a given polynomial with non-negative coefficients to the more general situation of detecting \emph{colored} $k$-multilinear monomials that satisfy additional constraints on the multiplicities of the colors appearing in them. Our techniques can be viewed as a characteristic-zero generalization of the algebraic tools developed by Guillemot and Sikora (MFCS'10) and Björklund, Kaski and Kowalik (STACS'13).

As applications, we recover the state-of-the-art deterministic algorithms for the $\textsc{Graph Motif}$ problem due to Pinter, Shachnai and Zehavi (MFCS'14), and give new deterministic algorithms, running in time $O^\ast(4^k)$, for generalizations of certain questions on colored directed spanning trees or bipartite planar matchings, studied originally by Gutin, Reidl, Wahlström and Zehavi (J. Comp. Sys. Sci. 95, '18). Finally, we give improved randomized algorithms for intersecting three and four matroids of rank $k$, improving the previous record bounds of Brand and Pratt (ICALP'21) from $O^\ast(64^k)$ and $O^\ast(256^k)$, respectively, to $O^\ast(4^k)$.

11:45
FPT Approximation and Subexponential Algorithms for Covering Few or Many Edges
PRESENTER: Fedor V. Fomin

ABSTRACT. We study the $\alpha$-Fixed Cardinality Graph Partitioning ($\alpha$-FCGP) problem, the generic local graph partitioning problem introduced by Bonnet et al. [Algorithmica 2015]. In this problem, we are given a graph $G$, two numbers $k$ and $p$, and a constant $0\leq\alpha\leq 1$; the question is whether there is a set $S\subseteq V(G)$ of size $k$ whose coverage $cov_{\alpha}(S)$ is at least $p$ (or at most $p$ for the minimization version). The coverage function $cov_{\alpha}(\cdot)$ counts edges with exactly one endpoint in $S$ with weight $\alpha$ and edges with both endpoints in $S$ with weight $1 - \alpha$. $\alpha$-FCGP generalizes a number of fundamental graph problems such as Densest $k$-Subgraph, Max $k$-Vertex Cover, and Max $(k,n-k)$-Cut.

A natural question in the study of $\alpha$-FCGP is whether the algorithmic results known for its special cases, like Max $k$-Vertex Cover, can be extended to the more general setting. One of the simple but powerful methods for obtaining parameterized approximation [Manurangsi, SOSA 2019] and subexponential algorithms [Fomin et al., IPL 2011] for Max $k$-Vertex Cover is based on greedy vertex degree orderings. The main insight of our work is that greedy vertex degree orderings can be used to design fixed-parameter approximation schemes (FPT-AS) for $\alpha > 0$, as well as subexponential-time algorithms on apex-minor-free graphs for maximization with $\alpha > 1/3$ and for minimization with $\alpha < 1/3$.
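The coverage function defined above is simple to state in code; a minimal sketch (names and the edge-list representation are our own):

def cov_alpha(edges, S, alpha):
    # Boundary edges (exactly one endpoint in S) count with weight alpha,
    # internal edges (both endpoints in S) with weight 1 - alpha.
    S = set(S)
    total = 0.0
    for u, v in edges:
        endpoints_in_S = (u in S) + (v in S)
        if endpoints_in_S == 1:
            total += alpha
        elif endpoints_in_S == 2:
            total += 1 - alpha
    return total

Setting $\alpha = 0$ counts only the edges inside $S$ (Densest $k$-Subgraph), $\alpha = 1$ counts only the boundary edges (Max $(k,n-k)$-Cut), and $\alpha = 1/2$ gives half the number of edges covered by $S$ (Max $k$-Vertex Cover).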

12:10-14:00 Lunch Break
14:00-15:15 Session 9A: Constructive Logics and Semantics
Location: Amphi F
14:00
Formalizing Hyperspaces for Extracting Efficient Exact Real Computation
PRESENTER: Holger Thies

ABSTRACT. We propose a framework for certified computation on hyperspaces by formalizing various higher-order data types and operations in a constructive dependent type theory. Our approach builds on our previous work on the axiomatization of exact real computation, where we formalized nondeterministic first-order partial computations over real and complex numbers. Based on this axiomatization, we first define open, closed, compact and overt subsets in an abstract topological way that allows short and elegant proofs whose computational content coincides with standard definitions in computable analysis. From these proofs we extract programs for testing inclusion, overlap of sets, et cetera.

For Euclidean space $\mathbb{R}^m$, our framework additionally makes use of metric properties, which improves the extracted programs. To define interesting operations over hyperspaces of Euclidean space, we introduce a nondeterministic version of a continuity principle that is valid under the standard type-2 realizability interpretation. Instead of choosing one of the usual formulations of continuity, we formulate it in a way that is close to an interval extension operator, which is often already available in exact real computation software.

We prove that the operations on subsets preserve the encoding, and thereby define a small calculus to build new subsets from given ones, including limits of sequences that converge with respect to the Hausdorff metric. From the proofs, we extract programs that efficiently generate drawings of subsets of $\mathbb{R}^m$ with any given precision. As an application, we provide a function that constructs fractals, such as the Sierpinski triangle, from iterated function systems using the limit operation, resulting in certified programs that errorlessly draw such fractals.

14:25
Inductive Continuity via Brouwer Trees
PRESENTER: Ayberk Tosun

ABSTRACT. Continuity is a key principle of intuitionistic logic that is generally accepted by constructivists but is inconsistent with classical logic. Most commonly, continuity states that a function from the Baire space to numbers only needs a finite approximation of its input point to compute its output. Recently, a stronger formulation of the continuity principle was put forward. It states that for any function $F$ from the Baire space to numbers, there exists a (dialogue) tree that contains the values of $F$ at its leaves and such that the modulus of $F$ at each point of the Baire space is given by the length of the corresponding branch in the tree. In this paper we provide the first internalization of this strong continuity principle within a computational setting. Concretely, we present a class of intuitionistic theories that validate the stronger form of continuity by means of effectful computations that construct such dialogue trees internally to the theories. We further demonstrate that the strong continuity principle indeed implies other forms of continuity principles.
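Dialogue trees of the kind described above have a direct computational reading; a minimal Python sketch (our own illustration, not the paper's type-theoretic formalization):

class Leaf:
    def __init__(self, value):
        self.value = value       # the value of F on every point through this branch

class Query:
    def __init__(self, index, branch):
        self.index = index       # which coordinate of the input point to inspect
        self.branch = branch     # maps the observed answer to a subtree

def run(tree, point):
    # Evaluate F at `point` (a function from indices to numbers) by walking
    # the tree; the number of queries asked is the length of the branch,
    # i.e. the modulus of continuity at that point.
    modulus = 0
    while isinstance(tree, Query):
        tree = tree.branch(point(tree.index))
        modulus += 1
    return tree.value, modulus

# F(f) = f(0) + f(1): every branch of the tree has length 2.
F = Query(0, lambda a: Query(1, lambda b: Leaf(a + b)))
print(run(F, lambda n: n + 3))   # (7, 2)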

14:50
Locality Theorems in Semiring Semantics

ABSTRACT. Semiring semantics of first-order logic generalises classical Boolean semantics by permitting truth values from a commutative semiring, which can model information such as costs or access restrictions. This raises the question of to what extent classical model-theoretic properties still apply, and how this depends on the algebraic properties of the semiring.

In this paper, we study this question for the classical locality theorems due to Hanf and Gaifman. We prove that Hanf's locality theorem generalises to all semirings with idempotent operations, but fails for many non-idempotent semirings. We then consider Gaifman normal forms and show that for formulae with free variables, Gaifman's theorem does not generalise beyond the Boolean semiring. Also for sentences, it fails in the natural semiring and the tropical semiring. Our main result, however, is a constructive proof of the existence of Gaifman normal forms for min-max and lattice semirings. The proof implies a stronger version of Gaifman's classical theorem in Boolean semantics: every sentence has a Gaifman normal form which does not add negations.
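To fix intuitions: in semiring semantics over a commutative semiring $(K, +, \cdot, 0, 1)$, an interpretation $\pi$ assigns semiring values to literals and is extended to formulae in negation normal form by interpreting disjunction and existential quantification as sums, and conjunction and universal quantification as products (a standard presentation from the provenance-semantics literature; the paper's setup may differ in details):

\[ \pi(\varphi \lor \psi) = \pi(\varphi) + \pi(\psi), \qquad \pi(\varphi \land \psi) = \pi(\varphi) \cdot \pi(\psi), \qquad \pi(\exists x\,\varphi) = \sum_{a} \pi(\varphi[a/x]), \qquad \pi(\forall x\,\varphi) = \prod_{a} \pi(\varphi[a/x]). \]

The Boolean semiring recovers classical semantics, while, e.g., the tropical semiring $(\mathbb{R}_{\ge 0} \cup \{\infty\}, \min, +, \infty, 0)$ tracks a least cost of verifying the formula.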

14:00-15:15 Session 9B: Distributed and Parallel Algorithms
Location: Amphi G
14:00
Distributed CONGEST Algorithm for Finding Hamiltonian Paths in Dirac Graphs and Generalizations
PRESENTER: Moti Medina

ABSTRACT. We study the problem of finding a Hamiltonian cycle under the promise that the input graph has minimum degree at least $n/2$, where $n$ denotes the number of vertices in the graph. The classical theorem of Dirac states that such graphs (a.k.a. Dirac graphs) are Hamiltonian, i.e., contain a Hamiltonian cycle. Moreover, finding a Hamiltonian cycle in Dirac graphs can be done in polynomial time in the classical centralized model.

This paper presents a randomized distributed CONGEST algorithm that, w.h.p., finds a Hamiltonian cycle (as well as a maximum matching) within $O(\log n)$ rounds under the promise that the input graph is a Dirac graph. This upper bound is in contrast to general graphs, in which both the decision and search variants of Hamiltonicity require $\tilde{\Omega}(n^2)$ rounds, as shown by Bachrach et al. [PODC'19].

In addition, we consider two generalizations of Dirac graphs: Ore graphs and Rahman-Kaykobad graphs [IPL'05]. In Ore graphs, the sum of the degrees of every pair of non-adjacent vertices is at least $n$, and in Rahman-Kaykobad graphs, the sum of the degrees of every pair of non-adjacent vertices plus their distance is at least $n + 1$. We show how our algorithm for Dirac graphs can be adapted to work for these more general families of graphs.
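The three promises are simple degree conditions that are easy to check centrally; a small sketch (adjacency given as a dict mapping each vertex to the set of its neighbours; names are ours):

from collections import deque

def is_dirac(adj):
    n = len(adj)
    return all(2 * len(nbrs) >= n for nbrs in adj.values())

def is_ore(adj):
    n = len(adj)
    return all(len(adj[u]) + len(adj[v]) >= n
               for u in adj for v in adj
               if u != v and v not in adj[u])

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_rahman_kaykobad(adj):
    n = len(adj)
    return all(len(adj[u]) + len(adj[v])
               + bfs_distances(adj, u).get(v, float("inf")) >= n + 1
               for u in adj for v in adj
               if u != v and v not in adj[u])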

14:25
Descriptive complexity for distributed computing with circuits
PRESENTER: Antti Kuusisto

ABSTRACT. We consider distributed algorithms in the realistic scenario where distributed message passing is operated by circuits. We show that within this setting, the modal substitution calculus MSC precisely captures the expressive power of circuits. The result is established by constructing translations that are highly efficient with respect to size. We also observe that the Cole-Vishkin-style colouring algorithm can be specified by logarithmic-size programs (and thus also logarithmic-size circuits) in the bounded-degree scenario.
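For readers unfamiliar with it, one round of Cole-Vishkin colour reduction (on a ring or a rooted tree) replaces a node's colour by the index of the lowest bit on which it differs from its parent's colour, together with the value of that bit, shrinking colours with $L$ bits to colours with about $\log_2 L + 1$ bits; a sketch:

def cole_vishkin_step(color, parent_color):
    # Assumes color != parent_color; adjacent nodes keep distinct colours:
    # if both pick the same bit index i, their i-th bits must differ.
    diff = color ^ parent_color
    i = (diff & -diff).bit_length() - 1   # index of the lowest differing bit
    return 2 * i + ((color >> i) & 1)

print(cole_vishkin_step(0b1011, 0b0011))  # bit 3 differs, its value is 1 -> 7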

14:50
Parallel enumeration of parse trees

ABSTRACT. A parallel algorithm for enumerating parse trees of a given string according to a fixed context-free grammar is defined. The algorithm computes the number of parse trees of an input string; more generally, it applies to computing the weight of a string in a weighted grammar. The algorithm is first implemented on an arithmetic circuit of depth $O((\log n)^2)$ with $O(n^6)$ elements. Then, it is improved using fast matrix multiplication to use only $O(n^{5.38})$ elements, while preserving depth $O((\log n)^2)$.
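For a grammar in Chomsky normal form, the sequential analogue of the circuit described above is the classical counting version of CYK, cubic in the string length; a minimal sketch (the paper's contribution is the parallelization, which this sequential sketch does not attempt):

from collections import defaultdict

def count_parse_trees(word, start, unary, binary):
    # unary:  dict mapping a terminal a to the set of nonterminals A with A -> a
    # binary: list of rules (A, B, C) meaning A -> B C
    n = len(word)
    # count[i][j][A] = number of parse trees of word[i:j] with root A
    count = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, a in enumerate(word):
        for A in unary.get(a, ()):
            count[i][i + 1][A] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for A, B, C in binary:
                    count[i][j][A] += count[i][k][B] * count[k][j][C]
    return count[0][n][start]

# S -> S S | a over "aaaa": the parse trees are the binary trees with
# four leaves, so the count is the Catalan number C_3 = 5.
print(count_parse_trees("aaaa", "S", {"a": {"S"}}, [("S", "S", "S")]))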

15:15-15:45 Coffee Break
15:45-17:00 Session 10A: Cryptography, Security, and Quantum
Location: Amphi F
15:45
Speed Me up if You Can: Conditional Lower Bounds on Opacity Verification
PRESENTER: Jiří Balun

ABSTRACT. Opacity is a property of privacy and security applications asking whether, given a system model, a passive intruder that makes online observations of the system's behaviour can ascertain some “secret” information about the system. Deciding opacity is a PSpace-complete problem, and hence there are no polynomial-time algorithms to verify opacity under the assumption that PSpace differs from PTime. This assumption, however, raises the question of whether the existing exponential-time algorithms are the best possible or whether there are faster, sub-exponential-time algorithms. We show that under the (Strong) Exponential Time Hypothesis, there are no algorithms that would be significantly faster than the existing ones. As a by-product, we obtain a new conditional lower bound on the time complexity of deciding universality (and therefore also inclusion and equivalence) for nondeterministic finite automata.

16:10
Cryptanalysis of a Generalized Subset-Sum Pseudorandom Generator

ABSTRACT. We present attacks on a generalized subset-sum pseudorandom generator, which was proposed by von zur Gathen and Shparlinski in 2004. Our attacks rely on a sub-quadratic algorithm for solving a vectorial variant of the 3SUM problem, which is of independent interest. The presented attacks have complexities well below that of a brute-force attack, making the generators vulnerable. We provide a thorough analysis of the attacks and their complexities, and demonstrate their practicality through implementations and experiments.
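As a baseline for the vectorial 3SUM variant mentioned above, the classical quadratic hashing approach carries over to vectors directly (the paper's sub-quadratic algorithm is more involved and is not reproduced here):

def vector_3sum(A, B, C):
    # Find vectors a in A, b in B, c in C with a + b + c = 0 componentwise,
    # in O(|A| * |B| + |C|) expected time; vectors are tuples of ints.
    targets = set(C)
    for a in A:
        for b in B:
            c = tuple(-(x + y) for x, y in zip(a, b))
            if c in targets:
                return a, b, c
    return None

print(vector_3sum([(1, 2)], [(2, -1)], [(-3, -1)]))  # ((1, 2), (2, -1), (-3, -1))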

16:35
Distributed Merlin-Arthur Synthesis of Quantum States and Its Applications

ABSTRACT. The generation and verification of quantum states are fundamental tasks for quantum information processing that have recently been investigated by Irani, Natarajan, Nirkhe, Rao and Yuen [CCC 2022], Rosenthal and Yuen [ITCS 2022], Metger and Yuen [QIP 2023] under the term \emph{state synthesis}. This paper studies this concept from the viewpoint of quantum distributed computing, and especially distributed quantum Merlin-Arthur (dQMA) protocols. We first introduce a novel task, on a line, called state generation with distributed inputs (SGDI). In this task, the goal is to generate the quantum state $U\ket{\psi}$ at the rightmost node of the line, where $\ket{\psi}$ is a quantum state given at the leftmost node and $U$ is a unitary matrix whose description is distributed over the nodes of the line. We give a dQMA protocol for SGDI and utilize this protocol to construct a dQMA protocol for the Set Equality problem studied by Naor, Parter and Yogev [SODA 2020], and complement our protocol by showing classical lower bounds for this problem. Our second contribution is a dQMA protocol, based on a recent work by Zhu and Hayashi [Physical Review A, 2019], to create EPR-pairs between adjacent nodes of a network without quantum communication. As an application of this dQMA protocol, we prove a general result showing how to convert any dQMA protocol on an arbitrary network into another dQMA protocol where the verification stage does not require any quantum communication.

15:45-17:00 Session 10B: Verification 1
Location: Amphi G
15:45
The Geometry of Reachability in Continuous Vector Addition Systems with States
PRESENTER: Tim Leys

ABSTRACT. We study the geometry of reachability sets of continuous vector addition systems with states (VASS). In particular, we establish that they are “almost” Minkowski sums of convex cones and zonotopes generated by the vectors labelling the transitions of the VASS. We use this characterization to prove that short so-called linear path schemes suffice as witnesses of reachability in continuous VASS of fixed dimension. We then give new polynomial-time algorithms for the reachability problem restricted to linear path schemes. Finally, we establish that enriching the model with zero tests makes the reachability problem intractable already for linear path schemes of dimension two.

An online teaser is available at https://youtu.be/pbZnhTc9zss
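For orientation, in a continuous VASS a transition may be fired by a fractional amount: a transition $t$ from state $q$ to state $q'$ with effect vector $\mathbf{v}_t \in \mathbb{Z}^d$ induces the steps

\[ (q, \mathbf{x}) \xrightarrow{\;\beta t\;} (q', \mathbf{x} + \beta\mathbf{v}_t) \qquad \text{for every } \beta \in (0,1] \text{ with } \mathbf{x} + \beta\mathbf{v}_t \in \mathbb{R}_{\geq 0}^{d} \]

(a standard presentation following Blondin and Haase's continuous semantics; the paper's conventions may differ in details).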

16:10
Parikh One-Counter Automata
PRESENTER: Arka Ghosh

ABSTRACT. Counting abilities in finite automata are traditionally provided by two orthogonal extensions: adding a single counter that can be tested for zeroness at any point, or adding \(\mathbb{Z}\)-valued counters that are tested for equality only at the end of runs. In this paper, finite automata extended with both types of counters are introduced. They are called Parikh One-Counter Automata (POCA): the ``Parikh'' part referring to the evaluation of counters at the end of runs, and the ``One-Counter'' part to the single counter that can be tested during runs.

Their expressiveness, in the deterministic and nondeterministic variants, is investigated; it is shown in particular that there are deterministic POCA languages that cannot be expressed without nondeterminism in the original models. The natural decision problems are also studied; strikingly, most of them are no harder than in the original models. A parametric version of nonemptiness is also considered.

16:35
OBDD(join) Proofs Cannot be Balanced

ABSTRACT. We study OBDD-based propositional proof systems introduced in 2004 by Atserias, Kolaitis, and Vardi that prove the unsatisfiability of a CNF formula by deriving an identically false OBDD from OBDDs representing the clauses of the initial formula. We consider a proof system OBDD(∧) that uses only the conjunction (join) rule, and a proof system OBDD(∧, reordering) (introduced in 2017 by Itsykson, Knop, Romashchenko, and Sokolov) that uses the conjunction (join) rule together with a rule that allows changing the order of variables in an OBDD. We study whether these systems can be balanced, i.e., whether every refutation of size S can be reassembled into a refutation of depth O(log S) with at most a polynomial increase in size. We construct a family of unsatisfiable CNF formulas F_n such that F_n has a polynomial-size tree-like OBDD(∧) refutation of depth poly(n), while for an arbitrary OBDD(∧, reordering) refutation Π of F_n and every α ∈ (0, 1), the following trade-off holds: either the size of Π is 2^Ω(n^α) or the depth of Π is Ω(n^(1−α)). As a corollary of this trade-off, we conclude that OBDD(∧) and OBDD(∧, reordering) proofs cannot be balanced.