### Proceedings

The STACS 2016 proceedings are published in the Leibniz International Proceedings in Informatics (LIPIcs) series. The proceedings volume is available online. To access a single paper from the program, click on the arrow next to its title.

### Program

14:00 - 17:30
##### Tutorial Session chair: Nicolas Ollinger
Jarkko Kari - Tutorial on Cellular Automata and Tilings
Cellular automata (CA) are massively parallel systems where a regular grid of finite symbols is updated according to a synchronous application of the same local update rule everywhere. A closely related concept is that of Wang tiles, where a local relation between neighboring symbols determines allowed combinations of symbols in the grid. In this tutorial we start with classical results on cellular automata, such as the Garden-of-Eden theorems, the Curtis-Hedlund-Lyndon theorem and the balance property of surjective cellular automata. We then discuss Wang tiles and, in particular, the concept of aperiodicity and the undecidability of the domino problem. The domino problem is the decision problem to determine if a given Wang tile set admits any valid tilings of the grid. We relate Wang tiles to cellular automata, and establish a number of undecidability results for cellular automata.
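As a minimal illustration of the synchronous local update described above, here is a one-dimensional CA step in plain Python. The specific rule (elementary rule 110 on a circular configuration) is an illustrative choice, not part of the tutorial:

```python
def ca_step(config, rule):
    """Apply the same local rule synchronously to every cell of a 1D
    configuration with wrap-around (circular) boundary conditions."""
    n = len(config)
    return [rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

def rule110(left, centre, right):
    """Elementary rule 110: the neighborhood (l, c, r) indexes a bit of 110."""
    return (110 >> (left * 4 + centre * 2 + right)) & 1

config = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    config = ca_step(config, rule110)
```

One synchronous step maps `[0,0,0,1,0,0,0,0]` to `[0,0,1,1,0,0,0,0]` under this rule.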
19:00
Welcome Reception
09:00 - 10:00
##### Plenary Session: Invited Talk chair: Heribert Vollmer
Carsten Lutz - Complexity and Expressive Power of Ontology-Mediated Queries
Data sets that have been collected from multiple sources or extracted from the web are often highly incomplete and heterogeneous, which makes them hard to process and query. One way to address this challenge is to use ontologies, which provide a way to assign a semantics to the data, to enrich it with domain knowledge, and to provide an enriched and uniform vocabulary for querying. The combination of a traditional database query with an ontology is called an ontology-mediated query (OMQ). The aim of this talk is to survey fundamental properties of OMQs such as their complexity, expressive power, descriptive strength, and rewritability into traditional query languages such as SQL and Datalog. A central observation is that there is a close and fruitful connection between OMQs and constraint satisfaction problems (CSPs) as well as related fragments of monadic NP, which puts OMQs into a more general perspective and gives rise to a number of interesting results.

Coffee break
10:20 - 11:35
##### Session 1B chair: Michal Pilipczuk
1A Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar.
The classical center based clustering problems such as $k$-means/median/center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. For instance, consider the $r$-gather clustering problem where there is an additional constraint that each of the clusters should have at least $r$ points or the capacitated clustering problem where there is an upper bound on the cluster sizes. Consider a variant of the $k$-means problem that may be regarded as a general version of such problems. Here, the optimal clusters $O_1, ..., O_k$ are an arbitrary partition of the dataset and the goal is to output $k$-centers $c_1, ..., c_k$ such that the objective function $\sum_{i=1}^{k} \sum_{x \in O_{i}} ||x - c_{i}||^2$ is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of $k$ centers, will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such $k$ centers such that at least one of these $k$ centers behaves well. Given an error parameter $\varepsilon > 0$, let $\ell$ denote the size of the smallest list of $k$-centers such that at least one of the $k$-centers gives a $(1+\varepsilon)$ approximation w.r.t. the objective function above. In this paper, we show an upper bound on $\ell$ by giving a randomized algorithm that outputs a list of $2^{\tilde{O}(k/\varepsilon)}$ $k$-centers. We also give a closely matching lower bound of $2^{\tilde{\Omega}(k/\sqrt{\varepsilon})}$. Moreover, our algorithm runs in time $O \left(n d \cdot 2^{\tilde{O}(k/\varepsilon)} \right)$. 
This is a significant improvement over the previous result of Ding and Xu who gave an algorithm with running time $O \left(n d \cdot (\log{n})^{k} \cdot 2^{poly(k/\varepsilon)} \right)$ and output a list of size $O \left((\log{n})^k \cdot 2^{poly(k/\varepsilon)} \right)$. Our techniques generalize for the $k$-median problem and for many other settings where non-Euclidean distance measures are involved.
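The objective function from this abstract is easy to evaluate once a candidate set of centers is given; the list-based guarantee says that at least one candidate in the output list scores well. A small sketch in plain Python (the data layout is an illustrative assumption):

```python
def clustering_cost(partition, centers):
    """Objective from the abstract: sum_i sum_{x in O_i} ||x - c_i||^2,
    where partition[i] lists the points of cluster O_i and centers[i]
    is the center paired with that cluster."""
    return sum(sum((xj - cj) ** 2 for xj, cj in zip(x, c))
               for cluster, c in zip(partition, centers)
               for x in cluster)

def best_from_list(partition, center_lists):
    """Cost of the best candidate k-center set from a list -- the
    quantity the list-size bounds in the paper control."""
    return min(clustering_cost(partition, centers)
               for centers in center_lists)
```

For example, with clusters `[[(0,0),(2,0)], [(10,0)]]` and centers `[(1,0),(10,0)]` the cost is `1 + 1 + 0 = 2`.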
1B Raghav Kulkarni and Supartha Podder.
Let $H$ be a (non-empty) graph on $n$ vertices, possibly containing isolated vertices. Let $f_H(G) = 1$ iff the input graph $G$ on $n$ vertices contains $H$ as a (not necessarily induced) subgraph. Let $\alpha_H$ denote the cardinality of a maximum independent set of $H$. In this paper we show: $Q(f_H) = \Omega\left( \sqrt{\alpha_H \cdot n}\right),$ where $Q(f_H)$ denotes the quantum query complexity of $f_H$. As a consequence we obtain lower bounds for $Q(f_H)$ in terms of several other parameters of $H$ such as the average degree, minimum vertex cover, chromatic number, and the critical probability. We also use the above bound to show that $Q(f_H) = \Omega(n^{3/4})$ for any $H$, improving on the previously best known bound of $\Omega(n^{2/3})$ \cite{sy}. Until very recently, it was believed that the quantum query complexity is at least square root of the randomized one. Our $\Omega(n^{3/4})$ bound for $Q(f_H)$ matches the square root of the current best known bound for the randomized query complexity of $f_H$, which is $\Omega(n^{3/2})$ due to Groger. Interestingly, the randomized bound of $\Omega(\alpha_H \cdot n)$ for $f_H$ still remains open. We also study the Subgraph Homomorphism Problem, denoted by $f_{[H]}$, and show that $Q(f_{[H]}) = \Omega(n)$. Finally we extend our results to the $3$-uniform hypergraphs. In particular, we show an $\Omega(n^{4/5})$ bound for quantum query complexity of the Subgraph Isomorphism, improving on the previously known $\Omega(n^{3/4})$ bound. For the Subgraph Homomorphism, we obtain an $\Omega(n^{3/2})$ bound for the same.
1A Dimitris Fotakis, Michael Lampis, and Vangelis Th. Paschos.
It has long been known, since the classical work of (Arora, Karger, Karpinski, JCSS~99), that MAX-CUT admits a PTAS on dense graphs, and more generally, MAX-k-CSP admits a PTAS on "dense" instances with $\Omega(n^k)$ constraints. In this paper we extend and generalize their exhaustive sampling approach, presenting a framework for $(1-\varepsilon)$-approximating any MAX-k-CSP problem in sub-exponential time while significantly relaxing the denseness requirement on the input instance. Specifically, we prove that for any constants $\delta \in (0, 1]$ and $\varepsilon > 0$, we can approximate MAX-k-CSP problems with $\Omega(n^{k-1+\delta})$ constraints within a factor of $(1-\varepsilon)$ in time $2^{O(n^{1-\delta}\ln n /\varepsilon^3)}$. The framework is quite general and includes classical optimization problems, such as MAX-CUT, MAX-DICUT, MAX-k-SAT, and (with a slight extension) $k$-DENSEST SUBGRAPH, as special cases. For MAX-CUT in particular (where $k=2$), it gives an approximation scheme that runs in time sub-exponential in $n$ even for "almost-sparse" instances (graphs with $n^{1+\delta}$ edges). We prove that our results are essentially best possible, assuming the ETH. First, the density requirement cannot be relaxed further: there exists a constant $r < 1$ such that for all $\delta > 0$, MAX-k-SAT instances with $O(n^{k-1})$ clauses cannot be approximated within a ratio better than $r$ in time $2^{O(n^{1-\delta})}$. Second, the running time of our algorithm is almost tight \emph{for all densities}. Even for MAX-CUT there exists $r<1$ such that for all $\delta' > \delta >0$, MAX-CUT instances with $n^{1+\delta}$ edges cannot be approximated within a ratio better than $r$ in time $2^{n^{1-\delta'}}$.
1B Michael Elberfeld and Pascal Schweitzer.
Graph canonization is the problem of computing a unique representative, a canon, from the isomorphism class of a given graph. This implies that two graphs are isomorphic exactly if their canons are equal. We show that graphs of bounded tree width can be canonized in deterministic logarithmic space (logspace). This implies that the isomorphism problem for graphs of bounded tree width can be decided in logspace. Since isomorphism of trees is already hard for logspace, this makes the ubiquitous classes of graphs of bounded tree width one of the few classes of graphs for which the complexity of the isomorphism problem has been exactly determined.
1A Édouard Bonnet, Michael Lampis, and Vangelis Th. Paschos.
In this paper we focus on problems which do not admit a constant-factor approximation in polynomial time and explore how quickly their approximability improves as the allowed running time is gradually increased from polynomial to (sub-)exponential. We tackle a number of problems: For MIN INDEPENDENT DOMINATING SET, MAX INDUCED PATH, FOREST and TREE, for any $r(n)$, a simple, known scheme gives an approximation ratio of $r$ in time roughly $r^{n/r}$. We show that, for most values of $r$, if this running time could be significantly improved the ETH would fail. For MAX MINIMAL VERTEX COVER we give a non-trivial $\sqrt{r}$-approximation in time $2^{n/{r}}$. We match this with a similarly tight result. We also give a $\log r$-approximation for MIN ATSP in time $2^{n/r}$ and an $r$-approximation for MAX GRUNDY COLORING in time $r^{n/r}$. Furthermore, we show that MIN SET COVER exhibits a curious behavior in this super-polynomial setting: for any $\delta>0$ it admits an $m^\delta$-approximation, where $m$ is the number of sets, in just quasi-polynomial time. We observe that if such ratios could be achieved in polynomial time, the ETH or the Projection Games Conjecture would fail.
1B Mateus de Oliveira Oliveira.
In this work we study the relationship between size and treewidth of circuits computing variants of the element distinctness function. First, we show that for each $n$, any circuit of treewidth $t$ computing the element distinctness function $\delta_n:\{0,1\}^n\rightarrow \{0,1\}$ must have size at least $\Omega(\frac{n^2}{2^{O(t)} \log n})$. This result provides a non-trivial generalization of a super-linear lower bound for the size of Boolean formulas (treewidth $1$) due to Neciporuk. Subsequently, we turn our attention to read-once circuits, which are circuits where each variable labels at most one input vertex. For each $n$, we show that any read-once circuit of treewidth $t$ and size $s$ computing a variant $\tau_n:\{0,1\}^n\rightarrow \{0,1\}$ of the element distinctness function must satisfy the inequality $t\cdot \log s \geq \Omega(\frac{n}{\log n})$. Using this inequality in conjunction with known results in structural graph theory, we show that for each fixed graph $H$, read-once circuits computing $\tau_n$ which exclude $H$ as a minor must have size at least $\Omega(n^2/\log^{4} n)$. For certain well studied functions, such as the triangle-freeness function, this last lower bound can be improved to $\Omega(n^2/\log^2 n)$.

Coffee break
11:50 - 12:40
##### Session 2B chair: Bart M. P. Jansen
2A S. Akshay, Blaise Genest, Bruno Karelovic, and Nikhil Vyas.
The quantitative verification of Probabilistic Automata (PA) is undecidable in general. Unary PA are a simpler model where the choice of action is fixed. Still, the quantitative verification problem is open and known to be as hard as Skolem's problem, a problem on linear recurrence sequences whose decidability has been open for at least 40 years. In this paper, we approach this problem by studying the languages generated by unary PAs (as defined below), whose regularity would entail the decidability of quantitative verification. Given an initial distribution, we represent the trajectory of a unary PA over time as an infinite word over a finite alphabet, where the $n$th letter represents a probability range after $n$ steps. We extend this to a language of trajectories (a set of words), one trajectory for each initial distribution from a (possibly infinite) set. We show that if the eigenvalues of the transition matrix associated with the unary PA are all distinct positive real numbers, then the language is effectively regular. Further, we show that this result is at the boundary of regularity, as non-regular languages can be generated when the restrictions are even slightly relaxed. The regular representation of the language allows us to reason about more general properties, e.g., robustness of a regular property in a neighbourhood around a given distribution.
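A finite prefix of such a trajectory word is simple to compute: iterate the stochastic transition matrix on the initial distribution and, after each step, record a letter for the probability range. The sketch below uses a hypothetical letter encoding (how many of a given list of thresholds the acceptance probability meets); the paper's actual alphabet of ranges may differ:

```python
def trajectory_word(M, init, accepting, thresholds, steps):
    """Prefix of the trajectory of a unary PA: M is a row-stochastic
    transition matrix, init an initial distribution, accepting a set of
    state indices.  After each step, emit a letter encoding the range
    that the acceptance probability falls into."""
    dist = list(init)
    word = []
    for _ in range(steps):
        # one synchronous step: dist <- dist * M
        dist = [sum(dist[i] * M[i][j] for i in range(len(dist)))
                for j in range(len(M))]
        p = sum(dist[j] for j in accepting)
        word.append(sum(p >= th for th in thresholds))
    return word
```

For `M = [[0.5, 0.5], [0, 1]]` starting in state 0 with state 1 accepting, the acceptance probabilities after successive steps are 0.5, 0.75, 0.875, ...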
2B Rahul Arora, Ashu Gupta, Rohit Gurjar, and Raghunath Tewari.
The perfect matching problem has a randomized NC algorithm, using the celebrated Isolation Lemma of Mulmuley, Vazirani and Vazirani. The Isolation Lemma states that giving a random weight assignment to the edges of a graph ensures that it has a unique minimum weight perfect matching, with a good probability. We derandomize this lemma for K3,3-free and K5-free bipartite graphs. That is, we give a deterministic log-space construction of such a weight assignment for these graphs. Such a construction was known previously for planar bipartite graphs. Our result implies that the perfect matching problem for K3,3-free and K5-free bipartite graphs is in SPL. It also gives an alternate proof for an already known result: reachability for K3,3-free and K5-free graphs is in UL.
2A Nathanaël Fijalkow.
We consider the value 1 problem for probabilistic automata over finite words: it asks whether a given probabilistic automaton accepts words with probability arbitrarily close to 1. This problem is known to be undecidable. However, different algorithms have been proposed to partially solve it; it has been recently shown that the Markov Monoid algorithm, based on algebra, is the most correct algorithm so far. The first contribution of this paper is to give a characterisation of the Markov Monoid algorithm. The second contribution is to develop a profinite theory for probabilistic automata, called the prostochastic theory. This new framework gives a topological account of the value 1 problem, which in this context is cast as an emptiness problem. The above characterisation is reformulated using the prostochastic theory, allowing us to give a modular proof.
2B Frederik Garbe and Richard Mycroft.
We consider the complexity of the Hamilton cycle decision problem when restricted to $k$-uniform hypergraphs $H$ of high minimum codegree $\delta(H)$. We show that for tight Hamilton cycles this problem is NP-hard even when restricted to $k$-uniform hypergraphs $H$ with $\delta(H) \geq \tfrac{n}{2} - C$, where~$n$ is the order of $H$ and $C$ is a constant which depends only on $k$. This answers a question raised by Karpinski, Rucinski and Szymanska. Additionally we give a polynomial-time algorithm which, for a sufficiently small constant $\varepsilon > 0$, determines whether or not a $4$-uniform hypergraph~$H$ on~$n$ vertices with $\delta(H) \geq \frac{n}{2} - \varepsilon n$ contains a Hamilton $2$-cycle. This demonstrates that some looser Hamilton cycles exhibit interestingly different behaviour compared to tight Hamilton cycles. A key part of the proof is a precise characterisation of all $4$-uniform hypergraphs $H$ on~$n$ vertices with $\delta(H) \geq \frac{n}{2} - \varepsilon n$ which do not contain a Hamilton $2$-cycle; this may be of independent interest. As an additional corollary of this characterisation, we obtain an exact Dirac-type bound for the existence of a Hamilton $2$-cycle in a large $4$-uniform hypergraph.
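The minimum codegree $\delta(H)$ that the bounds above are stated in terms of is straightforward to compute by brute force. A small sketch (vertices are assumed to be numbered $0, \dots, n-1$):

```python
from itertools import combinations

def min_codegree(n, edges, k):
    """Minimum codegree delta(H) of a k-uniform hypergraph on vertices
    0..n-1: the minimum, over all (k-1)-element vertex sets S, of the
    number of edges containing S."""
    edges = [frozenset(e) for e in edges]
    return min(sum(1 for e in edges if S <= e)
               for S in map(frozenset, combinations(range(n), k - 1)))
```

For the complete 3-uniform hypergraph on 4 vertices, every pair lies in exactly 2 edges, so $\delta(H) = 2$.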

Lunch
14:10 - 15:25
##### Session 3B chair: Erik Jan van Leeuwen
3A Christoph Haase and Piotr Hofman.
Given two finite-state automata, are the Parikh images of the languages they generate equivalent? This problem was shown decidable in coNEXP by Huynh in 1985 within the more general setting of context-free commutative grammars. Huynh conjectured that a $\Pi_2^P$ upper bound might be possible, and Kopczynski and To established in 2010 such an upper bound when the size of the alphabet is fixed. The contribution of this paper is to show that the language equivalence problem for regular and context-free commutative grammars is actually coNEXP-complete. In addition, our lower bound immediately yields further coNEXP-completeness results for equivalence problems for regular commutative expressions, reversal-bounded counter automata and communication-free Petri nets. Finally, we improve both lower and upper bounds for language equivalence for exponent-sensitive commutative grammars.
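For intuition, the Parikh image of a word is its vector of letter counts, with the order of letters forgotten; two languages are Parikh-equivalent when these vectors coincide. A toy sketch for finite languages (the paper of course concerns the much harder regular and context-free cases):

```python
from collections import Counter

def parikh_image(words):
    """Parikh image of a finite language: the set of letter-count
    vectors of its words."""
    return {frozenset(Counter(w).items()) for w in words}

def parikh_equivalent(L1, L2):
    """Two languages are Parikh-equivalent iff their images agree."""
    return parikh_image(L1) == parikh_image(L2)
```

For example, `{"ab", "ba"}` and `{"ba"}` are Parikh-equivalent, since both words map to one occurrence of `a` and one of `b`.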
3B Akanksha Agrawal, Daniel Lokshtanov, Amer E. Mouawad, and Saket Saurabh.
For a family of graphs $\mathcal{F}$, a graph $G$, and a positive integer $k$, the $\mathcal{F}$-DELETION problem asks whether we can delete at most $k$ vertices from $G$ to obtain a graph in $\mathcal{F}$. $\mathcal{F}$-DELETION generalizes many classical graph problems such as Vertex Cover, Feedback Vertex Set, and Odd Cycle Transversal. A graph $G = (V, \cup_{i=1}^{\alpha} E_{i})$, where the edge set of $G$ is partitioned into $\alpha$ color classes, is called an $\alpha$-edge-colored graph. A natural extension of the $\mathcal{F}$-DELETION problem to edge-colored graphs is the $\alpha$-SIMULTANEOUS $\mathcal{F}$-DELETION problem. In the latter problem, we are given an $\alpha$-edge-colored graph $G$ and the goal is to find a set $S$ of at most $k$ vertices such that each graph $G_i \setminus S$, where $G_i = (V, E_i)$ and $1 \leq i \leq \alpha$, is in $\mathcal{F}$. In this work, we study $\alpha$-SIMULTANEOUS $\mathcal{F}$-DELETION for $\mathcal{F}$ being the family of forests. In other words, we focus on the $\alpha$-SIMULTANEOUS FEEDBACK VERTEX SET ($\alpha$-SIMFVS) problem. Algorithmically, we show that, like its classical counterpart, $\alpha$-SIMFVS parameterized by $k$ is fixed-parameter tractable (FPT) and admits a polynomial kernel, for any fixed constant $\alpha$. In particular, we give an algorithm running in $2^{O(\alpha k)}n^{O(1)}$ time and a kernel with $O(\alpha k^{3(\alpha + 1)})$ vertices. The running time of our algorithm implies that $\alpha$-SIMFVS is FPT even when $\alpha \in o(\log n)$. We complement this positive result by showing that for $\alpha \in O(\log n)$, where $n$ is the number of vertices in the input graph, $\alpha$-SIMFVS becomes $W[1]$-hard. Our positive results answer one of the open problems posed by Cai and Ye (MFCS 2014).
3A Štěpán Holub and Jeffrey Shallit.
We investigate the behavior of the periods and border lengths of random words over a fixed alphabet. We show that the asymptotic probability that a random word has a given maximal border length $k$ is a constant, depending only on $k$ and the alphabet size $\ell$. We give a recurrence that allows us to determine these constants with any required precision. This also allows us to evaluate the expected period of a random word. For the binary case, the expected period is asymptotically about $n-1.641$. We also give explicit formulas for the probability that a random word is unbordered or has maximum border length one.
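The maximal border length, and hence the minimal period, of a given word is computable in linear time via the classical KMP failure function. A minimal sketch:

```python
def max_border_length(w):
    """Length of the longest proper border of w (a border is a factor
    that is both a prefix and a suffix), via the KMP failure function."""
    fail = [0] * len(w)
    k = 0
    for i in range(1, len(w)):
        while k > 0 and w[i] != w[k]:
            k = fail[k - 1]
        if w[i] == w[k]:
            k += 1
        fail[i] = k
    return fail[-1] if w else 0

def period(w):
    """The minimal period equals |w| minus the maximal border length."""
    return len(w) - max_border_length(w)
```

For example, `"abcab"` has maximal border `"ab"` (length 2) and period 3, while `"abc"` is unbordered.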
3B Mithilesh Kumar and Daniel Lokshtanov.
A tournament is a directed graph $T$ such that every pair of vertices is connected by an arc. A feedback vertex set is a set $S$ of vertices in $T$ such that $T - S$ is acyclic. In this article we consider the FEEDBACK VERTEX SET problem in tournaments. Here the input is a tournament $T$ and an integer $k$, and the task is to determine whether $T$ has a feedback vertex set of size at most $k$. We give a new algorithm for FEEDBACK VERTEX SET IN TOURNAMENTS. The running time of our algorithm is upper-bounded by $O(1.6181^k + n^{O(1)})$ and by $O(1.466^n)$. Thus our algorithm simultaneously improves over the fastest known parameterized algorithm for the problem by Dom et al. running in time $O(2^kk^{O(1)} + n^{O(1)})$, and the fastest known exact exponential-time algorithm by Gaspers and Mnich with running time $O(1.674^n)$. On the way to proving our main result we prove a strengthening of a special case of a graph partitioning theorem due to Bollobas and Scott. In particular we show that the vertices of any undirected $m$-edge graph of maximum degree $d$ can be colored white or black in such a way that for each of the two colors, the number of edges with both endpoints of that color is between $m/4 - d/2$ and $m/4 + d/2$.
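Verifying a candidate feedback vertex set in a tournament is easy, because a tournament is acyclic exactly when it contains no directed triangle (every 3-vertex sub-tournament is either transitive, with out-degrees 2, 1, 0, or a directed 3-cycle, with out-degrees 1, 1, 1). A brute-force verifier, not the paper's algorithm:

```python
from itertools import combinations

def is_feedback_vertex_set(n, arcs, S):
    """Check that deleting S from the tournament on vertices 0..n-1
    (with arc set `arcs`) leaves an acyclic digraph, by looking for a
    directed triangle among the remaining vertices."""
    arcset = set(arcs)
    rest = [v for v in range(n) if v not in S]
    for trio in combinations(rest, 3):
        outdeg = {v: 0 for v in trio}
        for u, v in combinations(trio, 2):
            # exactly one of (u, v), (v, u) is an arc of the tournament
            outdeg[u if (u, v) in arcset else v] += 1
        if all(d == 1 for d in outdeg.values()):  # directed 3-cycle
            return False
    return True
```

For the 3-cycle tournament with arcs `(0,1), (1,2), (2,0)`, the empty set is not a feedback vertex set but any single vertex is.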
3A Paweł Gawrychowski, Tomohiro I, Shunsuke Inenaga, Dominik Köppl, and Florin Manea.
For $\alpha\geq 1$, an $\alpha$-gapped repeat in a word $w$ is a factor $uvu$ of $w$ such that $|uv|\leq \alpha |u|$; the two occurrences of a factor $u$ in such a repeat are called arms. Such a repeat is called maximal if its arms cannot be extended simultaneously with the same symbol to the right nor to the left. We show that the number of all maximal $\alpha$-gapped repeats occurring in words of length $n$ is upper bounded by $18\alpha n$, allowing us to construct an algorithm finding all maximal $\alpha$-gapped repeats of a word on an integer alphabet of size $n^{\mathcal{O}(1)}$ in $\mathcal{O}(\alpha n)$ time. This result is optimal as there are words that have $\Theta(\alpha n)$ maximal $\alpha$-gapped repeats. Our techniques can be extended to get comparable results in the case of $\alpha$-gapped palindromes, i.e., factors $uvu^{\intercal}$ with $|uv|\leq \alpha |u|$.
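The definitions above can be checked directly by brute force. The sketch below enumerates maximal $\alpha$-gapped repeats as triples $(i, j, m)$: arm length $m$, arms starting at positions $i$ and $j$, so that $|uv| = j - i$ and the gapped-repeat condition reads $m \leq j - i \leq \alpha m$. It is exponentially slower than the paper's $\mathcal{O}(\alpha n)$ algorithm and only illustrates the definition:

```python
def maximal_alpha_gapped_repeats(w, alpha):
    """All maximal alpha-gapped repeats u v u of w, as triples (i, j, m):
    w[i:i+m] == w[j:j+m] are the arms, m <= j - i <= alpha * m, and the
    arms can be extended neither one letter to the left nor to the right."""
    n = len(w)
    reps = set()
    for i in range(n):
        for j in range(i + 1, n):
            for m in range(1, n - j + 1):
                if not (m <= j - i <= alpha * m):
                    continue
                if w[i:i + m] != w[j:j + m]:
                    continue
                extends_left = i > 0 and w[i - 1] == w[j - 1]
                extends_right = j + m < n and w[i + m] == w[j + m]
                if not extends_left and not extends_right:
                    reps.add((i, j, m))
    return reps
```

For $\alpha = 1$ the condition forces $j - i = m$, so only squares qualify: `"abab"` yields the single maximal repeat `(0, 2, 2)`.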
3B Eva-Maria Hols and Stefan Kratsch.
The SUBSET FEEDBACK VERTEX SET problem generalizes the classical FEEDBACK VERTEX SET problem and asks, for a given undirected graph $G=(V,E)$, a set $S \subseteq V$, and an integer $k$, whether there exists a set $X$ of at most $k$ vertices such that no cycle in $G-X$ contains a vertex of $S$. It was independently shown by Cygan et al. (ICALP '11, SIDMA '13) and Kawarabayashi and Kobayashi (JCTB '12) that SUBSET FEEDBACK VERTEX SET is fixed-parameter tractable for parameter $k$. Cygan et al. asked whether the problem also admits a polynomial kernelization. We answer the question of Cygan et al. positively by giving a randomized polynomial kernelization for the equivalent version where $S$ is a set of edges. In a first step we show that EDGE SUBSET FEEDBACK VERTEX SET has a randomized polynomial kernel parameterized by $|S|+k$ with $\mathcal{O}(|S|^2k)$ vertices. For this we use the matroid-based tools of Kratsch and Wahlstrom (FOCS '12). Next we present a preprocessing that reduces the given instance $(G,S,k)$ to an equivalent instance $(G',S',k')$ where the size of $S'$ is bounded by $\mathcal{O}(k^4)$. These two results lead to a polynomial kernel for SUBSET FEEDBACK VERTEX SET with $\mathcal{O}(k^9)$ vertices.

Coffee break
15:45 - 17:00
##### Session 4B chair: Christophe Paul
4A Thomas Colcombet, Denis Kuperberg, Amaldev Manuel, and Szymon Toruńczyk.
Regular cost functions form a quantitative extension of regular languages that share the array of characterisations the latter possess. In this theory, functions are treated only up to preservation of boundedness on all subsets of the domain. In this work, we subject the well known distance automata (also called min-automata), and their dual max-automata, to this framework, and obtain a number of effective characterisations in terms of logic, expressions and algebra.
4B Matthias Mnich and Erik Jan van Leeuwen.
We consider the problem of finding a set $X$ of vertices (or arcs) with $|X| \leq k$ in a given digraph~$G$ such that $D = G-X$ is an acyclic digraph. In its generality, this is DIRECTED FEEDBACK VERTEX SET or DIRECTED FEEDBACK ARC SET respectively. The existence of a polynomial kernel for these problems is a notorious open problem in the field of kernelization, and little progress has been made. In this paper, we consider both deletion problems with an additional restriction on $D$, namely that $D$ must be an out-forest, an out-tree, or a (directed) pumpkin. Our main results show that for each of these three restrictions the vertex deletion problem remains NP-hard, but we can obtain a kernel with $k^{O(1)}$ vertices on general digraphs $G$. We also show that, in contrast to the vertex deletion problem, the arc deletion problem with each of the above restrictions can be solved in polynomial time.
4A Laure Daviaud, Denis Kuperberg, and Jean-Éric Pin.
Regular cost functions were introduced as a quantitative generalisation of regular languages, retaining many of their equivalent characterisations and decidability properties. For instance, stabilisation monoids play the same role for cost functions as monoids do for regular languages. The purpose of this article is to further extend this algebraic approach by generalising two results on regular languages to cost functions: Eilenberg's variety theorem and profinite equational characterisations of lattices of regular languages. This opens interesting new perspectives, but the specificities of cost functions introduce difficulties that prevent these generalisations from being straightforward. In contrast, although syntactic algebras can be defined for formal power series over a commutative ring, no such notion is known for series over semirings and in particular over the tropical semiring.
4B Bart M. P. Jansen.
The CONSTRAINED BIPARTITE VERTEX COVER problem asks, for a bipartite graph~$G$ with partite sets~$A$ and~$B$, and integers~$k_A$ and~$k_B$, whether there is a vertex cover for~$G$ containing at most~$k_A$ vertices from~$A$ and~$k_B$ vertices from~$B$. The problem has an easy kernel with~$2 k_A \cdot k_B$ edges and~$4 k_A \cdot k_B$ vertices, based on the fact that every vertex in~$A$ of degree more than~$k_B$ has to be included in the solution, together with every vertex in~$B$ of degree more than~$k_A$. We show that the number of vertices and edges in this kernel are asymptotically essentially optimal in terms of the product~$k_A \cdot k_B$. We prove that if there is a polynomial-time algorithm that reduces any instance~$(G,A,B,k_A,k_B)$ of CONSTRAINED BIPARTITE VERTEX COVER to an equivalent instance~$(G',A',B',k'_A,k'_B)$ such that~$k'_A \in (k_A)^{\mathcal{O}(1)}$,~$k'_B \in (k_B)^{\mathcal{O}(1)}$, and~$|V(G')| \in \mathcal{O}((k_A \cdot k_B)^{1 - \varepsilon})$, for some~$\varepsilon > 0$, then NP~$\subseteq$~coNP$/$poly and the polynomial-time hierarchy collapses. Using a different construction, we prove that if there is a polynomial-time algorithm that reduces any $n$-vertex instance into an equivalent instance (of a possibly different problem) that can be encoded in~$\mathcal{O}(n^{2- \varepsilon})$ bits, then NP~$\subseteq$~coNP$/$poly.
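The easy kernel mentioned above rests on a single forced-vertex rule. A sketch of that rule in plain Python (all names are illustrative; negative residual budgets signal a no-instance):

```python
from collections import defaultdict

def high_degree_rule(A, B, edges, kA, kB):
    """Forced-vertex rule behind the easy kernel: a vertex of A with more
    than kB neighbours must enter the cover, since its neighbourhood in B
    cannot be covered within budget kB; symmetrically for B.  Afterwards
    every remaining vertex has bounded degree, which bounds the kernel
    size in terms of kA * kB."""
    deg = defaultdict(int)
    for u, v in edges:           # edges given as (a, b) with a in A, b in B
        deg[u] += 1
        deg[v] += 1
    forced_A = {a for a in A if deg[a] > kB}
    forced_B = {b for b in B if deg[b] > kA}
    kept = [(u, v) for u, v in edges
            if u not in forced_A and v not in forced_B]
    return forced_A, forced_B, kept, kA - len(forced_A), kB - len(forced_B)
```

After the rule fires, every surviving vertex of $A$ has degree at most $k_B$ and vice versa, so at most $2 k_A \cdot k_B$ edges can remain in a yes-instance.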
4A Filip Mazowiecki and Cristian Riveros.
Cost register automata (CRA) and their subclass, copyless CRA, were recently proposed by Alur et al. as a new model for computing functions over strings. We study structural properties, expressiveness, and closure properties of copyless CRA. We show that copyless CRA are strictly less expressive than weighted automata and are not closed under the reverse operation. To find a better class we impose restrictions on copyless CRA, which leads to a new robust computational model that is closed under reverse and other extensions.
4B Pål Grønås Drange, Markus S. Dregi, Fedor V. Fomin, Stephan Kreutzer, Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk, Felix Reidl, Fernando Sánchez Villaamil, Saket Saurabh, Sebastian Siebertz, and Somnath Sikdar.
We prove that for every positive integer $r$ and for every graph class $\mathcal{G}$ of bounded expansion, the $r$-DOMINATING SET problem admits a linear kernel on graphs from $\mathcal{G}$. Moreover, in the more general case when $\mathcal{G}$ is only assumed to be nowhere dense, we give an almost linear kernel on $\mathcal{G}$ for the classic DOMINATING SET problem, i.e., for the case $r=1$. These results generalize a line of previous research on finding linear kernels for DOMINATING SET and $r$-DOMINATING SET (Alber et al., JACM 2004, Bodlaender et al., FOCS 2009, Fomin et al., SODA 2010, Fomin et al., SODA 2012, Fomin et al., STACS 2013). However, the approach taken in this work, which is based on the theory of sparse graphs, is radically different and conceptually much simpler than the previous approaches. We complement our findings by showing that for the closely related CONNECTED DOMINATING SET problem, the existence of such kernelization algorithms is unlikely, even though the problem is known to admit a linear kernel on $H$-topological-minor-free graphs (Fomin et al., STACS 2013). Also, we prove that for any somewhere dense class $\mathcal{G}$, there is some $r$ for which $r$-DOMINATING SET is $W[2]$-hard on $\mathcal{G}$. Thus, our results fall short of proving a sharp dichotomy for the parameterized complexity of $r$-DOMINATING SET on subgraph-monotone graph classes: we conjecture that the border of tractability lies exactly between nowhere dense and somewhere dense graph classes.
18:00

09:00 - 10:00
##### Plenary Session: Invited Talk chair: Ioan Todinca
Virginia Vassilevska Williams - Fine-Grained Algorithms and Complexity
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in $t(n)$ time but not in $t(n)^{1-\epsilon}$ time for $\epsilon>0$. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on $P\neq NP$. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem can already be solved in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. The goal of these reductions is as follows. Suppose problem A is solvable in $a(n)$ time and problem B in $b(n)$ time, and no $a(n)^{1-\epsilon}$ and $b(n)^{1-\epsilon}$ algorithms are known for A and B respectively, for any $\epsilon>0$. Then if A is fine-grained reducible to problem B (for $a(n)$ and $b(n)$), then a $b(n)^{1-\epsilon}$ time algorithm for B (for any $\epsilon>0$) implies an $a(n)^{1-\epsilon'}$ algorithm for A (for some $\epsilon'>0$). Now, mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require $t(n)^{1-o(1)}$ time for some $t(n)$, and (2) reduce X in a fine-grained way to many important problems.
This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. In this talk I will give an overview of the current progress in this area of study, and will highlight some new exciting developments.
10:20 - 11:35
##### Session 5B chair: Mathieu Liedloff
5A Lin Chen and Guochuan Zhang.
We consider a natural generalization of the classical multiple knapsack problem in which instead of packing single items we are packing groups of items. In this problem, we have multiple knapsacks and a set of items which are partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group could be. Deciding if a group of items of total weight $2$ can be packed into two knapsacks of unit capacity is already NP-hard, and it thus rules out a constant-approximation algorithm for this problem in general. We then focus on the parameterized version where the total weight of items in each group is bounded by a factor $\delta$ of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to $\delta$ are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of $\delta$, changes substantially, which is a clear difference from the classical multiple knapsack problem.
5B Maurice Chandoo.
We compute a canonical circular-arc representation for a given circular-arc (CA) graph, which implies solving the isomorphism and recognition problems for this class. To accomplish this we split the class of CA graphs into uniform and non-uniform ones and employ a generalized version of the argument given by Köbler et al.~(2013) that was used to show that the subclass of Helly CA graphs can be canonized in logspace. For uniform CA graphs our approach works in logspace; moreover, Helly CA graphs are a strict subclass of uniform CA graphs, so our result generalizes the canonization result for Helly CA graphs. In the non-uniform case a specific set $\Omega$ of ambiguous vertices arises. By choosing the parameter $k$ to be the cardinality of $\Omega$, this obstacle can be overcome by brute force. This leads to an $\mathcal{O}(k + \log n)$ space algorithm to compute a canonical representation for non-uniform, and therefore all, CA graphs.
5A Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof.
The SUBSET SUM problem asks whether a given set of $n$ positive integers contains a subset of elements that sum up to a given target $t$. It is an outstanding open question whether the $\mathcal{O}^*(2^{n/2})$-time algorithm for SUBSET SUM by Horowitz and Sahni [J.~ACM~1974] can be beaten in the worst-case setting by a "truly faster", $\mathcal{O}^*(2^{(0.5-\delta)n})$-time algorithm, for some constant $\delta > 0$. Continuing earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size $\beta$, defined as the largest number of subsets of the $n$ input integers that yield the same sum. For every $\epsilon > 0$ we give a truly faster algorithm for instances with $\beta \leq 2^{(0.5-\epsilon)n}$, as well as instances with $\beta \geq 2^{0.661n}$. Consequently, we also obtain a characterization in terms of the popular density parameter $n/\log_2 t$: if all instances of density at least $1.003$ admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and is therefore a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for SUBSET SUM and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
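As context for the Horowitz-Sahni bound mentioned above, here is a minimal Python sketch (ours, not from the paper) of the classical meet-in-the-middle idea behind the $\mathcal{O}^*(2^{n/2})$ algorithm: enumerate the subset sums of each half of the input, sort one side, and binary-search it for complements of the other side.

```python
from bisect import bisect_left

def subset_sum(nums, t):
    """Decide whether some subset of nums sums to the target t.

    Horowitz-Sahni meet-in-the-middle: enumerate all 2^(n/2) subset sums
    of each half, sort one side, and binary-search it for the complement
    t - s of every sum s on the other side.
    """
    half = len(nums) // 2

    def all_sums(items):
        sums = [0]
        for x in items:
            sums += [s + x for s in sums]  # each item doubles the list
        return sums

    right_sums = sorted(all_sums(nums[half:]))
    for s in all_sums(nums[:half]):
        i = bisect_left(right_sums, t - s)
        if i < len(right_sums) and right_sums[i] == t - s:
            return True
    return False
```

Both enumerations have size $2^{n/2}$, so with sorting the total work is $\mathcal{O}^*(2^{n/2})$ instead of the $2^n$ of brute force.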
5B Fedor V. Fomin, Petr Golovach, Fahad Panolan, and Saket Saurabh.
In the EDGE EDITING TO CONNECTED $f$-DEGREE GRAPH problem we are given a graph $G$, an integer $k$ and a function $f$ assigning integers to vertices of $G$. The task is to decide whether there is a connected graph $F$ on the same vertex set as $G$, such that for every vertex $v$, its degree in $F$ is $f(v)$ and the number of edges in $E(G)\triangle E(F)$, the symmetric difference of $E(G)$ and $E(F)$, is at most $k$. We show that EDGE EDITING TO CONNECTED $f$-DEGREE GRAPH is fixed-parameter tractable (FPT) by providing an algorithm solving the problem on an $n$-vertex graph in time $2^{\mathcal{O}(k)}n^{\mathcal{O}(1)}$. Our FPT algorithm is based on a non-trivial combination of color-coding and fast computations of representative families over direct sum matroid of $\ell$-elongation of co-graphic matroid associated with $G$ and uniform matroid over $\overline{E(G)}$, the set of non-edges of $G$. We believe that this combination could be useful in designing parameterized algorithms for other edge editing problems.
5A Nicolas Auger, Cyril Nicaud, and Carine Pivoteau.
Most modern processors are heavily parallelized and use predictors to guess the outcome of conditional branches, in order to avoid costly stalls in their pipelines. We propose predictor-friendly versions of two classical algorithms: exponentiation by squaring and binary search in a sorted array. These variants result in fewer mispredictions on average, at the cost of an increased number of operations. These theoretical results are supported by experiments showing that our algorithms perform significantly better than the standard ones for primitive data types.
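To illustrate the flavor of such predictor-friendly variants (a hedged sketch of the general technique, not the authors' code), here is a branch-free lower-bound binary search: each iteration does the same arithmetic regardless of the comparison outcome, a structure that compilers can turn into conditional moves instead of hard-to-predict branches. The speedup only materializes in compiled languages; Python merely shows the control structure.

```python
from bisect import bisect_left  # used below only as a reference check

def branchfree_lower_bound(a, x):
    """Index of the first element >= x in the sorted list a.

    Instead of branching left/right, each iteration keeps a window of
    fixed, halving length and advances the base index by arithmetic on
    the comparison outcome (0 or 1), so the search path takes no
    data-dependent branch.
    """
    if not a:
        return 0
    base, n = 0, len(a)
    while n > 1:
        half = n // 2
        # (a[base + half - 1] < x) evaluates to 0 or 1
        base += (a[base + half - 1] < x) * half
        n -= half
    return base + (a[base] < x)

# behaves exactly like bisect.bisect_left
assert branchfree_lower_bound([1, 3, 5, 7], 5) == bisect_left([1, 3, 5, 7], 5)
```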
5B Michał Pilipczuk and Marcin Wrochna.
Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, 14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., 03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem parameterized by the number of input strings does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth.

Coffee break
11:50 - 12:40
##### Session 6B chair: Mathieu Liedloff
6A Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman.
Catalytic computation, defined by Buhrman, Cleve, Koucký, Loff and Speelman (STOC 2014), is a space-bounded computation where in addition to our working memory we have an exponentially larger auxiliary memory which is full; the auxiliary memory may be used throughout the computation, but it must be restored to its initial content by the end of the computation. Motivated by the surprising power of this model, we set out to study the non-deterministic version of catalytic computation. We establish that non-deterministic catalytic log-space is contained in ZPP, which is the same bound known for its deterministic counterpart, and we prove that non-deterministic catalytic space is closed under complement (under a standard derandomization assumption). Furthermore, we establish hierarchy theorems for non-deterministic and deterministic catalytic computation.
6B Spyros Angelopoulos, Christoph Dürr, and Thomas Lidbetter.
We study the problem of searching for a hidden target in an environment that is modeled by an edge-weighted graph. Most of the previous work on this problem considers the pathwise cost formulation, in which the cost incurred by the searcher is the overall time to locate the target, assuming that the searcher moves at unit speed. More recent work introduced the setting of expanding search in which the searcher incurs cost only upon visiting previously unexplored areas of the graph. Such a paradigm is useful in modeling problems in which the cost of re-exploration is negligible (such as coal mining). In our work we study algorithmic and computational issues of expanding search, for a variety of search environments including general graphs, trees and star-like graphs. In particular, we rely on the deterministic and randomized search ratio as the performance measures of search strategies, which were originally introduced by Koutsoupias and Papadimitriou [ICALP 1996] in the context of pathwise search. The search ratio is essentially the best competitive ratio among all possible strategies. Our main objective is to explore how the transition from pathwise to expanding search affects the competitive analysis, which has applications to optimization problems beyond the strict boundaries of search problems.
6A Eugene Asarin, Julien Cervelle, Aldric Degorre, Cătălin Dima, Florian Horn, and Victor Kozyakin.
Two intimately related new classes of games are introduced and studied: entropy games (EGs) and matrix multiplication games (MMGs). An EG is played on a finite arena by two-and-a-half players: Despot, Tribune and the non-deterministic People. Despot wants to make the set of possible People's behaviors as small as possible, while Tribune wants to make it as large as possible. An MMG is played by two players that alternately write matrices from some predefined finite sets. One wants to maximize the growth rate of the product, and the other to minimize it. We show that in general MMGs are undecidable in quite a strong sense. On the positive side, EGs correspond to a subclass of MMGs, and we prove that such MMGs and EGs are determined, and that the optimal strategies are simple. The complexity of solving such games is in NP∩coNP.
6B Stefan Fafianie, Stefan Kratsch, and Vuong Anh Quyen.
In this work we study preprocessing for tractable problems when part of the input is unknown or uncertain. This comes up naturally if, e.g., the load of some machines or the congestion of some roads is not known far enough in advance, or if we have to regularly solve a problem over instances that are largely similar, e.g., daily airport scheduling with few charter flights. Unlike robust optimization, which also studies settings like this, our goal lies not in computing solutions that are (approximately) good for every instantiation. Rather, we seek to preprocess the known parts of the input, to speed up finding an optimal solution once the missing data is known. We present efficient algorithms that given an instance with partially uncertain input generate an instance of size polynomial in the amount of uncertain data that is equivalent for every instantiation of the unknown part. Concretely, we obtain such algorithms for minimum spanning tree, minimum weight matroid basis, and maximum cardinality bipartite matching, where respectively the weight of edges, weight of elements, and the availability of vertices is unknown for part of the input. Furthermore, we show that there are tractable problems, such as small connected vertex cover, for which one cannot hope to obtain similar results.

Lunch
14:30 - 15:45
##### Session 7B chair: Matthias Mnich
7A Neeraj Kayal, Vineet Nair, and Chandan Saha.
We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following: 1. There exists an explicit $n$-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires $2^{\Omega(n)}$ size. 2. Any multilinear depth three circuit computing IMM$_{n,d}$ (the iterated matrix multiplication polynomial formed by multiplying $d$, $n \times n$ symbolic matrices) has $n^{\Omega(d)}$ size. IMM$_{n,d}$ can be easily computed by a poly($n,d$) sized ROABP. 3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit $n$-variate, degree $d$ polynomial computable by a poly($n,d$) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size $n^{\Omega(d)}$. This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models. The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field.
7B Harald Räcke and Richard Stotz.
We present approximation algorithms for balanced partitioning problems. These problems are notoriously hard and we present new bicriteria approximation algorithms, that approximate the optimal cost and relax the balance constraint. In the first scenario, we consider Min-Max $k$-Partitioning, the problem of dividing a graph into $k$ equal-sized parts while minimizing the maximum cost of edges cut by a single part. Our approximation algorithm relaxes the size of the parts by $(1+\varepsilon)$ and approximates the optimal cost by $\mathcal{O}(\log^{1.5} n \log\log n)$, for every $0 < \varepsilon < 1$. This is the first nontrivial algorithm for this problem that relaxes the balance constraint by less than 2. In the second scenario, we consider strategies to find a minimum-cost mapping of a graph of processes to a hierarchical network with identical processors at the leaves. This Hierarchical Graph Partitioning problem has been studied recently by Hajiaghayi et al. who presented an $(\mathcal{O}(\log n),(1+\varepsilon)(h+1))$ approximation algorithm for constant network heights $h$. We use spreading metrics to give an improved $(\mathcal{O}(\log n),(1+\varepsilon)h)$ approximation algorithm that runs in polynomial time for arbitrary network heights.
7A John M. Hitchcock and Hadi Shafei.
We study the polynomial-time autoreducibility of NP-complete sets and obtain separations under strong hypotheses for NP. Assuming there is a p-generic set in NP, we show the following: - For every $k \geq 2$, there is a $k$-T-complete set for NP that is $k$-T autoreducible, but is not $k$-tt autoreducible or $(k-1)$-T autoreducible. - For every $k \geq 3$, there is a $k$-tt-complete set for NP that is $k$-tt autoreducible, but is not $(k-1)$-tt autoreducible or $(k-2)$-T autoreducible. - There is a tt-complete set for NP that is tt-autoreducible, but is not btt-autoreducible. Under the stronger assumption that there is a p-generic set in $\operatorname{NP}\cap\operatorname{coNP}$, we show: - For every $k \geq 2$, there is a $k$-tt-complete set for NP that is $k$-tt autoreducible, but is not $(k-1)$-T autoreducible. Our proofs are based on constructions from separating NP-completeness notions. For example, the construction of a 2-T-complete set for NP that is not 2-tt-complete also separates 2-T-autoreducibility from 2-tt-autoreducibility.
7B Anna Adamaszek, Antonios Antoniadis, and Tobias Mömke.
We introduce a new framework of Airport and Railway Problems, which combines capacitated facility location with network design. In this framework we are given a graph with weights on the vertices and on the edges, together with a parameter $k$. The vertices of the graph represent cities, and weights denote respectively the costs of opening airports in the cities and building railways that connect pairs of cities. The parameter $k$ can be thought of as the capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting the cities, where each connected component in the network spans at most $k$ vertices, contains an open airport, and the network satisfies some additional requirements specific to the problem in the framework. We consider two problems in this framework. In the AR_F problem there are no additional requirements for the network. This problem is related to capacitated facility location. In the AR_P problem, we require each component to be a path with airports at both endpoints. AR_P is a relaxation of the capacitated vehicle routing problem (CVRP). We consider the problems in the two-dimensional Euclidean setting. We show that both AR_F and AR_P are NP-hard, even for uniform vertex weights (i.e., when the cost of building an airport is the same for all cities). On the positive side, we provide polynomial time approximation schemes for AR_F and AR_P when vertex weights are uniform. We also investigate AR_F and AR_P for $k = \infty$. In this setting we present an exact polynomial time algorithm for AR_F with general vertex costs, which also works for general edge costs. In contrast to AR_F, AR_P remains NP-hard when $k=\infty$, and we present a polynomial time approximation scheme for general vertex weights. We believe that our PTAS for AR_P with uniform vertex weights and arbitrary $k$ brings us closer towards a PTAS for Euclidean CVRP, for which the main difficulty is to deal with paths of length at most $k$.
7A Arnaud Mary and Yann Strozecki.
In this paper we address the problem of generating all elements obtained by the saturation of an initial set by some operations. More precisely, we prove that we can generate the closure by polymorphisms of a Boolean relation with polynomial delay. Therefore we can compute with polynomial delay the closure of a family of sets by any set of "set operations" (e.g., by union, intersection, difference, symmetric difference, $\dots$). To do so, we prove that for any set of operations $\mathcal{F}$, one can decide in polynomial time whether an element belongs to the closure by $\mathcal{F}$ of a family of sets. When the relation is over a domain with more than two elements, we prove that our generic enumeration method fails, since the associated decision problem is NP-hard.
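To make the object being enumerated concrete, here is a small illustrative sketch (ours; a naive fixpoint that does not achieve the paper's polynomial-delay guarantee) computing the closure of a family of sets under a given list of binary set operations:

```python
def set_closure(family, ops):
    """Closure of a family of sets under binary set operations.

    Semi-naive fixpoint: repeatedly apply every operation to pairs of
    already-generated sets, keeping only sets not seen before, until no
    new set appears. Sets are stored as frozensets for deduplication.
    """
    closed = {frozenset(s) for s in family}
    frontier = list(closed)
    while frontier:
        new = []
        for a in list(closed):
            for b in frontier:
                for op in ops:
                    for c in (frozenset(op(a, b)), frozenset(op(b, a))):
                        if c not in closed:
                            closed.add(c)
                            new.append(c)
        frontier = new
    return closed

# closure of {{1}, {2,3}} under union and intersection:
# adds {1,2,3} and the empty set
ops = [lambda a, b: a | b, lambda a, b: a & b]
```

The output can be exponentially larger than the input, which is why the paper measures efficiency in delay between consecutive solutions rather than total time.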
7B Pinyan Lu, Kuan Yang, and Chihao Zhang.
The hardcore and Ising models are two of the most important families of two-state spin systems in statistical physics. The partition function of a spin system is the central concept in statistical physics, connecting microscopic particles and their interactions with macroscopic and statistical properties of materials such as energy, entropy, ferromagnetism, etc. If each local interaction of the system involves only two particles, the system can be described by a graph. In this case, fully polynomial-time approximation schemes (FPTAS) for computing the partition function of both the hardcore and the anti-ferromagnetic Ising model were designed up to the uniqueness condition of the system. These results are the best possible, since approximately computing the partition function beyond this threshold is NP-hard. In this paper, we generalize these results to general physical systems, where each local interaction may involve multiple particles. Such systems are described by hypergraphs. For the hardcore model, we also provide an FPTAS up to the uniqueness condition, and for the anti-ferromagnetic Ising model, we obtain an FPTAS under a slightly stronger condition.

Coffee break
16:00 - 17:15
##### Session 8B chair: Matthias Mnich
8A Manuel Bodirsky, Peter Jonsson, and Trung Van Pham.
We systematically study the computational complexity of a broad class of computational problems in phylogenetic reconstruction. The class contains for example the rooted triple consistency problem, forbidden subtree problems, the quartet consistency problem, and many other problems studied in the bioinformatics literature. The studied problems can be described as constraint satisfaction problems where the constraints have a first-order definition over the rooted triple relation. We show that every such phylogeny problem can be solved in polynomial time or is NP-complete. On the algorithmic side, we generalize a well-known polynomial-time algorithm of Aho, Sagiv, Szymanski, and Ullman for the rooted triple consistency problem. Our algorithm repeatedly solves linear equation systems to construct a solution in polynomial time. We then show that every phylogeny problem that cannot be solved by our algorithm is NP-complete. Our classification establishes a dichotomy for a large class of infinite structures that we believe is of independent interest in universal algebra, model theory, and topology. The proof of our main result combines results and techniques from various research areas: a recent classification of the model-complete cores of the reducts of the homogeneous binary branching C-relation, Leeb's Ramsey theorem for rooted trees, and universal algebra.
8B Mikkel Abrahamsen, Greg Bodwin, Eva Rotenberg, and Morten Stöckel.
Graph reconstruction algorithms seek to learn a hidden graph by repeatedly querying a black-box oracle for information about the graph structure. Perhaps the most well studied and applied version of the problem uses a distance oracle, which can report the shortest path distance between any pair of nodes. We introduce and study the betweenness oracle, where $\operatorname{bet}(a, m, z)$ is true iff $m$ lies on a shortest path between $a$ and $z$. This oracle is strictly weaker than a distance oracle, in the sense that a betweenness query can be simulated by a constant number of distance queries, but not vice versa. Despite this, we are able to develop betweenness reconstruction algorithms that match the current state of the art for distance reconstruction, and even improve it for certain types of graphs. We obtain the following algorithms: - Reconstruction of general graphs in $O(n^2)$ queries - Reconstruction of degree-bounded graphs in $\tilde{O}(n^{3/2})$ queries - Reconstruction of geodetic degree-bounded graphs in $\tilde{O}(n)$ queries In addition to being a fundamental graph theoretic problem with some natural applications, our new results shed light on some avenues for progress in the distance reconstruction problem.
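The claimed simulation of a betweenness query by a constant number of distance queries can be sketched as follows (a hedged illustration with BFS standing in for the distance oracle on an unweighted graph; the function names are ours, not the paper's): $m$ lies on a shortest $a$-$z$ path exactly when $d(a,m)+d(m,z)=d(a,z)$, which takes three distance queries.

```python
from collections import deque

def bfs_dist(adj, u, v):
    """Unweighted shortest-path distance from u to v; stands in for the
    distance oracle."""
    d = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return d[x]
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1
                q.append(y)
    return float('inf')

def bet(adj, a, m, z):
    """Betweenness query simulated by three distance queries:
    m is on some shortest a-z path iff d(a,m) + d(m,z) = d(a,z)."""
    return bfs_dist(adj, a, m) + bfs_dist(adj, m, z) == bfs_dist(adj, a, z)
```

The converse direction fails: a betweenness oracle returns a single bit, so it cannot recover exact distances with O(1) queries, which is the sense in which it is strictly weaker.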
8A Markus Lohrey and Georg Zetzsche.
It is shown that the knapsack problem, which was introduced by Myasnikov et al. for arbitrary finitely generated groups, can be solved in NP for graph groups. This result even holds if the group elements are represented in a compressed form by SLPs, which generalizes the classical NP-completeness result of the integer knapsack problem. We also prove general transfer results: NP-membership of the knapsack problem is passed on to finite extensions, HNN-extensions over finite associated subgroups, and amalgamated products with finite identified subgroups.
8B Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick.
Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck Spanning Tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in $O(m\log^*n)$ time, where $m$ is the number of edges and $n$ is the number of vertices in the graph. We present a slightly improved randomized algorithm for these problems with an expected running time of $O(m\beta(m,n))$, where $\beta(m,n)=\min\{k\ge 1 \mid \log^{(k)}n \le \frac{m}{n}\} \le \log^*n - \log^*({m}/{n})+1$. This is the first improvement for these problems in over 25 years. In particular, if $m\ge n\log^{(k)}n$, for some constant $k$, the expected running time of the new algorithm is $O(m)$. Our algorithm, like that of Gabow and Tarjan, works in the comparison model. We also observe that in the word-RAM model, both problems can be solved deterministically in $O(m)$ time. Finally, we solve an open problem of Andersson et al., giving a deterministic $O(m)$-time comparison-based algorithm for solving deterministic 2-player turn-based zero-sum terminal payoff games, also known as Deterministic Graphical Games (DGG).
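For readers unfamiliar with the problem, a simple Dijkstra-style sketch (ours; it runs in $O(m\log n)$, far from the near-linear bounds discussed above) computes the bottleneck value by propagating, for each vertex, the minimum over paths of the largest edge weight used:

```python
import heapq

def bottleneck_path(graph, s, t):
    """Smallest possible maximum edge weight on an s-to-t path, or None
    if t is unreachable. graph maps a vertex to a list of (neighbor,
    weight) pairs.
    """
    best = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = max(d, w)  # bottleneck of the extended path
            if nd < best.get(v, float('inf')):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return None
```

Since the bottleneck value is monotone along a path (it can only grow as edges are appended), the first time $t$ is popped its value is optimal, exactly as in Dijkstra's algorithm with sums replaced by maxima.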
8A Vittorio Bilò and Marios Mavronicolas.
[Schaefer and Stefankovic, Theory of Computing Systems, 2015] provided an explicit formulation of $\exists {\mathbb{R}}$ as the class capturing the complexity of deciding the Existential Theory of the Reals, and established that deciding, given a 3-player game, whether or not it has a Nash equilibrium with no probability exceeding a given rational is $\exists {\mathbb{R}}$-complete. Four more decision problems about Nash equilibria for $3$-player games were very recently shown $\exists {\mathbb{R}}$-complete via a chain of individual, problem-specific reductions in [Garg et al., Proceedings of ICALP 2015]; determining more such $\exists {\mathbb{R}}$-complete problems was posed there as an open problem. In this work, we deliver an extensive catalog of $\exists {\mathbb{R}}$-complete decision problems about Nash equilibria in 3-player games, thus resolving completely the open problem from [Garg et al., Proceedings of ICALP 2015]. Towards this end, we present a single and very simple, unifying reduction from the $\exists {\mathbb{R}}$-complete decision problem from [Schaefer and Stefankovic, Theory of Computing Systems, 2015] to (almost) all the decision problems about Nash equilibria that were before shown ${\mathcal{NP}}$-complete for $2$-player games in [Bilò and Mavronicolas, Proceedings of SAGT 2012; Conitzer and Sandholm, Games and Economic Behavior, 2008; Gilboa and Zemel, Games and Economic Behavior, 1989]. Encompassed in the catalog are the four decision problems shown $\exists {\mathbb{R}}$-complete in [Garg et al., Proceedings of ICALP 2015].
8B Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti.
Let $G$ be an $n$-node and $m$-edge positively real-weighted undirected graph. For any given integer $f \ge 1$, we study the problem of designing a sparse $f$-edge-fault-tolerant ($f$-EFT) $\sigma$-approximate single-source shortest-path tree ($\sigma$-ASPT), namely a subgraph of $G$ having as few edges as possible and which, following the failure of a set $F$ of at most $f$ edges in $G$, contains paths from a fixed source that are stretched at most by a factor of $\sigma$. To this end, we provide an algorithm that efficiently computes an $f$-EFT $(2|F|+1)$-ASPT of size $O(f n)$. Our structure improves on a previous related construction designed for unweighted graphs, having the same size but guaranteeing a larger stretch factor of $3(f+1)$, plus an additive term of $(f+1) \log n$. Then, we show how to convert our structure into an efficient $f$-EFT single-source distance oracle (SSDO), that can be built in $\widetilde{O}(f m)$ time, has size $O(fn \log^2 n)$, and is able to report, after the failure of the edge set $F$, in $O(|F|^2 \log^2 n)$ time a $(2|F|+1)$-approximate distance from the source to any node, and a corresponding approximate path in the same amount of time plus the path's size. Such an oracle is obtained by handling another fundamental problem, namely that of updating a minimum spanning forest (MSF) of $G$ after a batch of $k$ simultaneous edge modifications (i.e., edge insertions, deletions and weight changes) is performed. For this problem, we build in $O(m \log^3 n)$ time a sensitivity oracle of size $O(m \log^2 n)$, that reports in $O(k^2 \log^2 n)$ time the (at most $2k$) edges either exiting from or entering into the MSF. As a result of independent interest, it is worth noticing that our MSF oracle can be employed to handle arbitrary sequences of $o(\sqrt[4]{n}/\log n)$ (non-simultaneous) updates with a worst-case time per update of $o(\sqrt{n})$. Thus, for relatively short sequences of updates, our oracle should be preferred w.r.t. the best-known (in a worst-case sense) MSF fully-dynamic algorithm, requiring $O(\sqrt{n})$ time per update.
20:00

09:00 - 10:00
##### Plenary Session: Invited Talk chair: Nicolas Ollinger
Jérôme Leroux - Ideal Decompositions for Vector Addition Systems
Vector addition systems, or equivalently Petri nets, are one of the most popular formal models for the representation and the analysis of parallel processes. Many problems for vector addition systems are known to be decidable thanks to the theory of well-structured transition systems. Indeed, vector addition systems with configurations equipped with the classical point-wise ordering are well-structured transition systems. Based on this observation, problems like coverability or termination can be proven decidable. However, the theory of well-structured transition systems does not explain the decidability of the reachability problem. In this presentation, we show that runs of vector addition systems can also be equipped with a well quasi-order. This observation provides a unified understanding of the data structures involved in solving many problems for vector addition systems, including the central reachability problem.

Coffee break
10:20 - 11:35
##### Session 9B chair: Martin Dietzfelbinger
9A Mikołaj Bojańczyk, Paweł Parys, and Szymon Toruńczyk.
We consider the logic MSO+U, which is monadic second-order logic extended with the unbounding quantifier. The unbounding quantifier is used to say that a property of finite sets holds for sets of arbitrarily large size. We prove that the logic is undecidable on infinite words, i.e., the MSO+U theory of $(\mathbb{N},\le)$ is undecidable. This settles an open problem about the logic, and improves a previous undecidability result, which used infinite trees and additional axioms from set theory.
9B Gerth Stølting Brodal.
An external memory data structure is presented for maintaining a dynamic set of $N$ two-dimensional points under the insertion and deletion of points, and supporting unsorted 3-sided range reporting queries and top-$k$ queries, where top-$k$ queries report the $k$~points with highest $y$-value within a given $x$-range. For any constant $0<\varepsilon\leq \frac{1}{2}$, a data structure is constructed that supports updates in amortized $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs and queries in amortized $O(\frac{1}{\varepsilon}\log_B N+K/B)$ IOs, where $B$ is the external memory block size, and $K$ is the size of the output to the query (for top-$k$ queries $K$ is the minimum of $k$ and the number of points in the query interval). The data structure uses linear space. The update bound is a significant factor $B^{1-\varepsilon}$ improvement over the previous best update bounds for these two query problems, while staying within the same query and space bounds.
9A Achim Blumensath, Thomas Colcombet, and Paweł Parys.
We prove that satisfiability over infinite words is decidable for a fragment of asymptotic monadic second-order logic. In this fragment we only allow formulae of the form $\exists t\forall s\exists r\,\phi(r,s,t)$, where $\phi$ does not use quantifiers over number variables, and variables $r$ and $s$ can be only used simultaneously, in subformulae of the form $s<f(x)\leq r$.
9B Clément L. Canonne, Ilias Diakonikolas, Themis Gouleakis, and Ronitt Rubinfeld.
We study the question of testing structured properties (classes) of discrete distributions. Specifically, given sample access to an arbitrary distribution $D$ over $[n]$ and a property $\mathcal{P}$, the goal is to distinguish between $D\in\mathcal{P}$ and $\ell_{1}(D,\mathcal{P})>\varepsilon$. We develop a general algorithm for this question, which applies to a large range of "shape-constrained" properties, including monotone, log-concave, $t$-modal, piecewise-polynomial, and Poisson Binomial distributions. Moreover, for all cases considered, our algorithm has near-optimal sample complexity with regard to the domain size and is computationally efficient. For most of these classes, we provide the first non-trivial tester in the literature. In addition, we also describe a generic method to prove lower bounds for this problem, and use it to show that our upper bounds are nearly tight. Finally, we extend some of our techniques to tolerant testing, deriving nearly tight upper and lower bounds for the corresponding questions.
9A Bernhard Gittenberger and Zbigniew Gołębiewski.
John Tromp introduced the so-called 'binary lambda calculus' as a way to encode lambda terms as $0$-$1$-strings. Later, Grygiel and Lescanne conjectured that the number of binary lambda terms with $m$ free indices and of size $n$ (encoded as binary words of length $n$) is $o(n^{-3/2} \tau^{-n})$ for $\tau \approx 1.963448\ldots$. We generalize the proposed notion of size and show that for several classes of lambda terms, including binary lambda terms with $m$ free indices, the number of terms of size $n$ is $\mathrm{\Theta}\left(n^{-3/2} \rho^{-n}\right)$ with some class-dependent constant $\rho$, which in particular disproves the above-mentioned conjecture. A way to obtain lower and upper bounds for the constant near the leading term is presented and numerical results for a few previously introduced classes of lambda terms are given.
9B Sang Won Bae, Matias Korman, Joseph S.B. Mitchell, Yoshio Okamoto, Valentin Polishchuk, and Haitao Wang.
For a polygonal domain with $h$ holes and a total of $n$ vertices, we present algorithms that compute the $L_1$ geodesic diameter in $O(n^2+h^4)$ time and the $L_1$ geodesic center in $O((n^4+n^2 h^4)\alpha(n))$ time, where $\alpha(\cdot)$ denotes the inverse Ackermann function. No algorithms were known for these problems before. For the Euclidean counterpart, the best algorithms compute the geodesic diameter in $O(n^{7.73})$ or $O(n^7(h+\log n))$ time, and compute the geodesic center in $O(n^{12+\epsilon})$ time. Therefore, our algorithms are much faster than the algorithms for the Euclidean problems. Our algorithms are based on several interesting observations on $L_1$ shortest paths in polygonal domains.

Coffee break
11:50 - 12:40
##### Sessions 10A + 10B chair: Nicolas Ollinger
10A Olaf Beyersdorff, Leroy Chew, Meena Mahajan, and Anil Shukla.
The groundbreaking paper Short proofs are narrow  resolution made simple by Ben-Sasson and Wigderson (J. ACM 2001) introduces what is today arguably the main technique to obtain resolution lower bounds: to show a lower bound for the width of proofs. Another important measure for resolution is space, and in their fundamental work, Atserias and Dalmau (J. Comput. Syst. Sci. 2008) show that space lower bounds again can be obtained via width lower bounds. Here we assess whether similar techniques are effective for resolution calculi for quantified Boolean formulas (QBF). A mixed picture emerges. Our main results show that both the relations between size and width as well as between space and width drastically fail in Q-resolution, even in its weaker tree-like version. On the other hand, we obtain positive results for the expansion-based resolution systems &#8704;Exp+Res and IR-calc, however only in the weak tree-like models. Technically, our negative results rely on showing width lower bounds together with simultaneous upper bounds for size and space. For our positive results we exhibit space and width-preserving simulations between QBF resolution calculi.
10B Alexey Milovanov.
Algorithmic statistics considers the following problem: given a binary string $x$ (e.g., some experimental data), find a "good" explanation of this data. It uses algorithmic information theory to define formally what a good explanation is. In this paper we extend this framework in two directions. First, the explanations are not only interesting in themselves but also used for prediction: we want to know what kind of data we may reasonably expect in similar situations (repeating the same experiment). We show that some kind of hierarchy can be constructed both in terms of algorithmic statistics and using the notion of a priori probability, and these two approaches turn out to be equivalent (Theorem 5). Second, a more realistic approach, which goes back to machine learning theory, assumes that we have not a single data string $x$ but a set of "positive examples" $x_1,\ldots,x_l$ that all belong to some unknown set $A$, a property that we want to learn. We want this set $A$ to contain all positive examples and to be as small and simple as possible. We show how algorithmic statistics can be extended to cover this situation (Theorem 11).
10A Yuval Filmus, Pavel Hrubeš, and Massimo Lauria.
In this paper, we compare the strength of the semantic and syntactic version of the cutting planes proof system. First, we show that the lower bound technique of Pudlák applies also to semantic cutting planes: the proof system has feasible interpolation via monotone real circuits, which gives an exponential lower bound on lengths of semantic cutting planes refutations. Second, we show that semantic refutations are stronger than syntactic ones. In particular, we give a formula for which any refutation in syntactic cutting planes requires exponential length, while there is a polynomial length refutation in semantic cutting planes. In other words, syntactic cutting planes does not p-simulate semantic cutting planes. We also give two incompatible integer inequalities which require exponential length refutation in syntactic cutting planes. Finally, we pose the following problem, which arises in connection with semantic inference of arity larger than two: can every multivariate non-decreasing real function be expressed as a composition of non-decreasing real functions in two variables?
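For context on the calculus compared above (an editorial aside, not from the paper): syntactic cutting planes manipulates integer linear inequalities over $0$/$1$ variables using nonnegative linear combinations together with the division (rounding) rule

$$\frac{\sum_i a_i x_i \ge b}{\sum_i (a_i/c)\, x_i \ge \lceil b/c \rceil} \qquad (c > 0,\ c \mid a_i \text{ for all } i),$$

which is sound because the left-hand side takes integer values on $0$/$1$ assignments. The semantic version instead admits any inequality that is a sound consequence (over $0$/$1$ assignments) of two previously derived lines, which is what the paper shows to be strictly stronger.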
10B Timo Kötzing and Martin Schirneck.
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

Lunch