pith. machine review for the scientific record.

cs.DM

Discrete Mathematics

Covers combinatorics, graph theory, applications of probability. Roughly includes material in ACM Subject Classes G.2 and G.3.

cs.DM 2026-05-13 Recognition

One central variable turns linear ascents exponential

Binary constraints on one additional variable can create exponential ascents

A star of 2n triangles on 4n+1 Boolean variables forces ascents of length 10*2^n - 9 with treedepth 3 and feedback vertex set size 1.

Local search in combinatorial optimisation can be viewed as an uphill climb on a corresponding fitness landscape, where the assignments visited by a strict local search follow an ascent in the landscape. This hill-climbing is sometimes surprisingly efficient, but not always. Since fitness landscapes can be succinctly represented by valued constraint satisfaction problems (VCSPs), it is natural to ask: what properties of VCSPs ensure that all ascents are polynomial? Or alternatively, what are the ``simplest'' VCSPs with exponential ascents? Prior examples of VCSPs with long ascents were built up as a chain of gadgets of constraints. Here we give a simpler star of gadgets construction by gluing 2n triangles of constraints at a common centre variable. We obtain a binary VCSP on 4n + 1 Boolean variables with an exponential ascent of length 10*2^n - 9. The variable at the centre of our construction intertwines two sublandscapes with only linear ascents into one with exponential ascents. The VCSP that we construct is significantly simpler than prior constructions in terms of treedepth (reducing \Omega(log n) to 3) and feedback vertex set number (reducing \Omega(n) to 1). We discuss the consequences of this simplicity for the parameterized complexity of local search.
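To make the notion of an ascent concrete, here is a minimal sketch of strict hill-climbing on a Boolean fitness landscape. This is our own illustration, not the paper's star-of-triangles construction: `strict_ascent` and the random landscape are hypothetical stand-ins.

```python
import itertools
import random

def strict_ascent(fitness, n, start, rng):
    """Follow a strict local-search ascent: repeatedly flip one bit that
    strictly improves fitness, until a local maximum is reached.
    Returns the sequence of assignments visited (the ascent)."""
    x = list(start)
    path = [tuple(x)]
    while True:
        improving = []
        for i in range(n):
            x[i] ^= 1                      # try flipping bit i
            if fitness(tuple(x)) > fitness(path[-1]):
                improving.append(i)
            x[i] ^= 1                      # undo the flip
        if not improving:
            return path                    # local maximum reached
        x[rng.choice(improving)] ^= 1      # take one improving move
        path.append(tuple(x))

# A random landscape over {0,1}^6 (illustrative only).
rng = random.Random(0)
n = 6
values = {x: rng.random() for x in itertools.product((0, 1), repeat=n)}
fitness = values.__getitem__
ascent = strict_ascent(fitness, n, (0,) * n, rng)
# Fitness strictly increases along the ascent.
assert all(fitness(ascent[k]) < fitness(ascent[k + 1])
           for k in range(len(ascent) - 1))
```

On random landscapes such ascents are typically short; the paper's point is that a carefully built VCSP of treedepth 3 can force some ascent to take exponentially many steps.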
cs.DM 2026-05-12 2 theorems

Paths maximize zero forcing sets among distance-hereditary graphs

The Path-Extremal Conjecture for Zero Forcing: Distance-Hereditary Graphs and a Split-Decomposition Reduction

Proof covers trees and cographs and reduces the problem to verifying split-prime graphs of bounded size.

For an $n$-vertex graph $G$, let $z(G;k)$ denote the number of zero forcing sets of size $k$. A conjecture of Boyer et al. asserts that the path $P_n$ maximizes these numbers coefficientwise among all $n$-vertex graphs; equivalently, the zero forcing polynomial of every $n$-vertex graph should be coefficientwise dominated by that of $P_n$. We prove this path-extremal conjecture for distance-hereditary graphs. This extends the previously known tree case to a much larger class that includes, in particular, all trees and all cographs. We then use canonical split decomposition to push the argument one step beyond the distance-hereditary setting. Specifically, we show that if a split-prime graph $H$ and all of its induced subgraphs are path-extremal, then every connected graph whose canonical split decomposition has a unique prime bag whose label graph is isomorphic to $H$ is also path-extremal. As a corollary, for each fixed $m$, if every induced subgraph of every split-prime graph on at most $m$ vertices is path-extremal, then so is every connected graph whose canonical split decomposition has a unique prime bag of size at most $m$. Thus, on these classes, the conjecture reduces to a finite verification problem on bounded-order prime cores. Our proofs combine two counting mechanisms for non-forcing sets -- fort obstructions arising from twin pairs and a leaf recurrence -- with the accessibility description of graph-labelled trees in the canonical split decomposition. This yields a new positive instance of the path-extremal conjecture and identifies a natural structural frontier for further progress.
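The counts $z(G;k)$ can be brute-forced on small graphs with the standard color-change rule (a colored vertex with exactly one uncolored neighbor forces it). This sketch is our own illustration with hypothetical helper names, not the paper's method:

```python
from itertools import combinations

def forcing_closure(adj, colored):
    """Apply the zero forcing rule to a fixed point."""
    colored = set(colored)
    changed = True
    while changed:
        changed = False
        for u in list(colored):
            uncolored = [v for v in adj[u] if v not in colored]
            if len(uncolored) == 1:        # u forces its unique uncolored neighbor
                colored.add(uncolored[0])
                changed = True
    return colored

def z(adj, k):
    """Number of zero forcing sets of size k (brute force)."""
    V = set(adj)
    return sum(1 for S in combinations(sorted(V), k)
               if forcing_closure(adj, S) == V)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# Either endpoint forces all of P4; no single vertex forces C4.
assert z(path4, 1) == 2 and z(cycle4, 1) == 0
# Consistent with the conjecture: the path dominates coefficientwise.
assert z(path4, 2) >= z(cycle4, 2)
```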
cs.DM 2026-05-11 2 theorems

Bounded carving width orders Eulerian digraphs by strong immersion

Well-Quasi-Ordering Eulerian Digraphs: Bounded Carving Width

Classes with this bound, even with vertex labels and routing rules, contain no infinite antichains under the immersion relation.

We prove that every class of Eulerian directed graphs of bounded carving width (equivalently of bounded degree and treewidth) is well-quasi-ordered by strong immersion. In fact, we prove a stronger result, namely that every class of Eulerian directed graphs of bounded carving width, where every vertex is additionally labeled from a well-quasi-order, fixes a linear order on its incident edges, and may impose further restrictions on how the immersion is allowed to route paths through it, is well-quasi-ordered by an adequate notion of strong immersion. To this end, we develop a framework seemingly suited to prove well-quasi-ordering for classes of Eulerian directed graphs by (strong) immersion and present a first meta theorem in that direction. We complement our results by observing that the class of Eulerian directed graphs of unbounded degree is \emph{not} well-quasi-ordered by \emph{strong} immersion, even if we assume the treewidth of the class to be at most two. We conclude with a dichotomy result, proving for a very restricted class of Eulerian directed graphs of unbounded degree that it is not well-quasi-ordered by strong immersion, but it is well-quasi-ordered by weak immersion.
cs.DM 2026-05-08 Recognition

Mutations let relaxed QUBO gradients beat heuristics on large graphs

Mutation-Guided Differentiable Quadratic Combinatorial Optimization

Escaping local maxima via targeted resets improves solution quality without heavy parallel GPU starts for MaxCut and independent set.

Recent studies suggest that gradient-based methods applied to relaxed box-constrained Quadratic Unconstrained Binary Optimization (QUBO) formulations can outperform classical heuristics in some large-scale regimes, often relying on heavy parallelization. However, these methods still underperform heuristics in other settings. In this work, we clarify this apparent discrepancy through a detailed analysis of the relaxed non-convex QUBO local maxima for both the Maximum Independent Set (MIS) and Maximum Cut (MaxCut) problems, and by introducing a new quadratic objective for MaxCut. Motivated by this analysis, we propose a mutation-based differentiable global reset algorithm, combined with local search to escape local maxima. We term our approach mQO, standing for mutation-based Quadratic combinatorial Optimization. The proposed strategy dramatically improves the performance of gradient-based solvers without heavy reliance on GPU parallelized initializations, indicating that stalling, rather than model capacity or compute, is the dominant bottleneck. As a result, on large-scale graphs, mQO achieves superior performance against state-of-the-art heuristics, commercial integer programming solvers, and recent GPU methods.
cs.DM 2026-05-08

Minor-closed classes get (1+o(1))log n adjacency labels

Adjacency labelling for proper minor-closed graph classes

A universal host graph on n^{1+o(1)} vertices contains every n-vertex member as an induced subgraph.

We show that every proper minor-closed class of graphs admits a $(1+o(1))\log_2 n$-bit adjacency labelling scheme. Equivalently, for every proper minor-closed class $\mathcal{G}$ and every positive integer $n$ there exists an $n^{1+o(1)}$-vertex graph $U$ such that every $n$-vertex graph in $\mathcal{G}$ is isomorphic to an induced subgraph of $U$. Both results are optimal up to the lower order term.
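The paper's scheme for minor-closed classes is far more involved, but the idea of an adjacency labelling scheme can be illustrated by the classic roughly 2·log2(n)-bit scheme for forests: each vertex stores its own id and its parent's id, and adjacency is decoded from the two labels alone. This toy sketch is ours, not the authors':

```python
def tree_labels(parent):
    """parent[v] = parent of v in a rooted tree (the root maps to itself).
    Each label is the pair (v, parent[v]): about 2*log2(n) bits."""
    return {v: (v, parent[v]) for v in parent}

def adjacent(lu, lv):
    """Decode adjacency from two labels alone: u ~ v iff one is the
    parent of the other."""
    (u, pu), (v, pv) = lu, lv
    return u != v and (pu == v or pv == u)

# A small rooted tree: edges (0,1), (0,2), (1,3), (1,4).
parent = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
labels = tree_labels(parent)
assert adjacent(labels[0], labels[1])
assert not adjacent(labels[2], labels[3])
```

The result above says that even (1+o(1))·log2(n) bits suffice for every proper minor-closed class, which is optimal up to the lower-order term.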
cs.DM 2026-05-07

The paper develops Markov chain methods for sampling uniform simultaneous edge-colorings.

Sampling Simultaneous Edge-Colorings

New weighted Hamming distance yields O(m log n) mixing for Glauber and local flip dynamics on simultaneous edge-colorings when k > (6+δ)Δ.

We study the sampling problem for simultaneous edge colorings. Given a pair of graphs $G_1=(V,E_1)$ and $G_2=(V,E_2)$ which are on the same vertex set $V$, a simultaneous edge coloring is an edge coloring of $G_1\cup G_2$ so that each of the individual graphs is properly colored. When each of $G_1$ and $G_2$ are of maximum degree $\Delta$, then it is conjectured that $\Delta+2$ colors suffice, and recent work asymptotically establishes the conjecture. We study Markov chains for randomly sampling from the uniform distribution over simultaneous edge colorings. Straightforward applications of Jerrum's classical coupling argument establish rapid mixing of the Glauber dynamics on the corresponding line graph when $k>8\Delta$. We present a simple weighted Hamming distance for which Jerrum's coupling yields optimal mixing time (up to constant factors) of $O(m\log{n})$ when $k>(6+\delta)\Delta$ for any fixed $\delta>0$. Moreover, utilizing the flip dynamics with our new metric, we obtain $O(m\log{n})$ mixing of the flip dynamics with a local choice of flip parameters, which flips only bounded-size components, when $k\geq 5.95\Delta$. The proof adapts previous coupling analyses for the flip dynamics to the setting of simultaneous edge colorings.
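The Glauber dynamics studied here resamples one edge's color at a time from the colors that keep both restrictions proper. A minimal sketch on a toy instance (the instance, function names, and color count are our own illustration):

```python
import random

def conflicting(e, f, E1, E2):
    """Two edges conflict if they share an endpoint and both lie in the
    same input graph (the coloring must be proper in G1 and in G2)."""
    share = bool(set(e) & set(f))
    return share and ((e in E1 and f in E1) or (e in E2 and f in E2))

def glauber_step(coloring, edges, E1, E2, k, rng):
    """Heat-bath move: pick a random edge and resample its color
    uniformly among the colors not blocked by conflicting edges."""
    e = rng.choice(edges)
    blocked = {coloring[f] for f in edges
               if f != e and conflicting(e, f, E1, E2)}
    free = [c for c in range(k) if c not in blocked]
    if free:
        coloring[e] = rng.choice(free)

# G1 a triangle, G2 a star, on the shared vertex set {0, 1, 2, 3}.
E1 = {(0, 1), (1, 2), (0, 2)}
E2 = {(0, 1), (0, 3)}
edges = sorted(E1 | E2)
k = 7                                # comfortably above the degree bounds
rng = random.Random(1)
coloring = {e: i for i, e in enumerate(edges)}   # all-distinct valid start
for _ in range(1000):
    glauber_step(coloring, edges, E1, E2, k, rng)
# The chain preserves simultaneous properness at every step.
assert all(coloring[e] != coloring[f] for e in edges for f in edges
           if e != f and conflicting(e, f, E1, E2))
```

The paper's contribution is a weighted Hamming metric under which Jerrum-style coupling proves this chain mixes in O(m log n) steps for k > (6+δ)Δ.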
cs.DM 2026-05-06

Algorithms build networks from required and forbidden LCA rules

Inferring Phylogenetic Networks from Required and Forbidden LCA-Constraints

Exact characterizations for three avoidance variants yield polynomial-time decisions and explicit constructions.

Phylogenetic networks provide a framework for representing evolutionary histories involving reticulate events such as hybridization or horizontal gene transfer. A central problem is to infer such networks from local structural information. In this paper, we study network inference from least common ancestor (LCA) constraints, which specify relative ancestral relationships between pairs of taxa. While previous work has characterized when a set of required LCA constraints can be realized by a phylogenetic network, practical applications may also involve constraints that must be explicitly avoided, for example due to biological prior knowledge. We therefore consider the realization problem for pairs $(R,F)$, where $R$ is a set of required LCA-constraints and $F$ is a set of forbidden ones. Since there are several natural ways to formalize what it means for a network to avoid a forbidden LCA-constraint, we study three such variants. For each of them, we characterize exactly when there exists a phylogenetic network that realizes all constraints in $R$ while avoiding all constraints in $F$ in the respective sense. Based on these characterizations, we derive polynomial-time algorithms that decide the existence of such networks and construct one whenever it exists.
cs.DM 2026-05-06

Restricted Dyck paths yield new Catalan identity

An Identity for Catalan Numbers via Restricted Dyck Paths

Paths of height at most h without k-1 consecutive valleys at the top level produce an algebraic equality among Catalan numbers not recorded in the existing literature.

Catalan numbers and their interpretations in terms of Dyck paths are widely used in different topics of applied mathematics and computer science. Here, we consider a general approach for constrained Dyck paths. In particular, we study Dyck paths of height at most $h$ with the additional restriction of having no $k-1$ consecutive valleys at height $h-1$. We give a combinatorial description of this class of paths and derive enumeration formulas using classical techniques for counting constrained lattice paths. As a consequence of this analysis, we obtain an identity involving Catalan numbers which, to the best of the authors' knowledge, does not appear in the existing literature. This identity arises naturally from the combinatorial interpretation and provides a new relation among families of Dyck paths with height and local structural constraints.
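Counting height-bounded Dyck paths is a small dynamic program; the following sketch (our own illustration, not the paper's enumeration) recovers the Catalan numbers when the height bound is slack and shows the count dropping once the bound bites:

```python
from functools import lru_cache
from math import comb

def dyck(n, h):
    """Number of Dyck paths with n up-steps that never exceed height h."""
    @lru_cache(maxsize=None)
    def walk(steps_left, height):
        if steps_left == 0:
            return 1 if height == 0 else 0
        total = 0
        if height < h:                       # take an up-step
            total += walk(steps_left - 1, height + 1)
        if height > 0:                       # take a down-step
            total += walk(steps_left - 1, height - 1)
        return total
    return walk(2 * n, 0)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# With no effective height bound, the count is the Catalan number.
assert all(dyck(n, n) == catalan(n) for n in range(1, 9))
# With the bound h = 2 the count drops (16 paths vs. C_5 = 42).
assert dyck(5, 2) < catalan(5)
```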
cs.DM 2026-05-05

Hybrid solver finishes hard 10x10 Latin squares in 5100s

Improving SAT Solvers on Orthogonal Latin Square Problems

Unaugmented CaDiCaL times out after seven days while the version calling the Euler-Parker algorithm succeeds on the toughest cases.

Latin squares are $n\times n$ matrices containing $n$ symbols, where each symbol appears exactly once in each row and column. They were studied by Euler, later popularized through Sudoku, and remain a rich source of difficult combinatorial search problems. Two Latin squares are orthogonal mates if, when overlaid, no ordered pair of symbols repeats. Pairs of orthogonal Latin squares exist for every order except 2 and 6, but finding orthogonal Latin squares computationally can be challenging. Satisfiability (SAT) solvers are strong at combinatorial search and have been used to resolve various kinds of orthogonal Latin square problems. On the other hand, SAT solvers lack domain knowledge about Latin squares, such as the Euler-Parker algorithm for orthogonal mate construction. In this paper, we propose a hybrid method combining a SAT solver with the Euler-Parker algorithm (implemented using a Diophantine system solver) and show that the resulting solver is effective at finding certain kinds of orthogonal Latin squares. For example, certain pairs of $10\times10$ orthogonal Latin squares whose existence was unknown for over 25 years were recently found by Bright, Keita, and Stevens using a SAT solver. The hardest cases could not be solved by the SAT solver CaDiCaL within seven days, but CaDiCaL augmented with an external Euler-Parker algorithm solves these cases in a median of around 5,100 seconds.
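The orthogonality condition itself is easy to verify directly. A minimal check (helper names are our own), shown on the classical order-3 pair built from the linear maps i+j and i+2j mod 3:

```python
def is_latin(M):
    """Each symbol appears exactly once in every row and column."""
    n = len(M)
    symbols = set(range(n))
    return (all(set(row) == symbols for row in M) and
            all({M[i][j] for i in range(n)} == symbols for j in range(n)))

def are_orthogonal(A, B):
    """Orthogonal mates: overlaying A and B repeats no ordered pair."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

n = 3
A = [[(i + j) % n for j in range(n)] for i in range(n)]
B = [[(i + 2 * j) % n for j in range(n)] for i in range(n)]
assert is_latin(A) and is_latin(B)
assert are_orthogonal(A, B)
```

The hard part, of course, is the search for a mate, which is where the SAT solver and the Euler-Parker algorithm come in.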
cs.DM 2026-05-04

Complete triplet and duet data reconstructs arboreal networks

Efficient Reconstruction of Arboreal Networks

The encoding supports a polynomial-time algorithm and a natural distance measure between such networks.

Arboreal networks are multi-rooted phylogenetic networks whose underlying graph is a tree. We give an encoding of stack-free arboreal networks in terms of triplets and the novel concept of a duet. This yields a polynomial time algorithm to construct these networks from complete triplet and duet systems. The classification results show correctness and lead to a natural metric on these multi-rooted networks.
cs.DM 2026-05-01

Two-graph model splits feasibility from movement in path discovery

Separating Feasibility and Movement in Solution Discovery: The Case of Path Discovery

The separation captures directed moves and costs while older models become special cases, producing both easy and hard instances.

We study solution discovery, where the goal is to obtain a feasible solution to a problem from an initial configuration by a bounded sequence of local moves. In many applications, however, the graph that defines which vertex sets are feasible is not the same as the graph that governs how tokens, agents, or resources may move. Existing models such as token sliding and token jumping typically do not distinguish the problem graph and the movement graph. Motivated by this mismatch, we introduce a directed weighted two-graph model that cleanly separates feasibility from movement. A problem graph specifies the desired combinatorial objects, while a movement graph specifies admissible relocations and their costs. This yields a flexible framework that captures asymmetry, heterogeneous movement constraints, and weighted transitions, while subsuming classical discovery models as special cases. We investigate this model through \textsc{Path Discovery} and \textsc{Shortest Path Discovery}, where the task is to realize a vertex set containing an $s$-$t$-path or a shortest $s$-$t$-path in the problem graph. These problems are particularly natural in applications, since directed and weighted shortest paths are among the most fundamental algorithmic primitives. At the same time, previous work has already shown that discovery can be computationally hard even when the underlying optimization problem is easy. Our results show that this phenomenon persists, and becomes especially rich, in the two-graph setting. We obtain a detailed complexity picture, identifying tractable cases as well as strong hardness results.
cs.DM 2026-04-30

Network design for potential flows solves via shortest paths

Approximating the Network Design Problem for Potential-Based Flows

Reductions to classical problems enable exact and approximate solutions for energy networks with matching hardness results.

We develop efficient algorithms for a fundamental network design problem arising in potential-based flow models, which are central to many energy transport networks (e.g., hydrogen and electricity). In contrast to classical network flow problems, the nonlinearities inherent in potential-based networks introduce significant new challenges. We address these challenges through intricate reductions to classical combinatorial optimization problems, such as (constrained) shortest path problems, enabling the application of well-established algorithmic techniques to compute exact and approximate solutions efficiently. Finally, we complement these algorithmic results with matching complexity results concerning the hardness and non-approximability of the considered problem variants.
cs.DM 2026-04-24

Rocq proof strengthens Sands-Sauer-Woodrow theorem

A formal proof of the Sands-Sauer-Woodrow theorem using the Rocq prover and mathcomp/ssreflect

Weaker condition on asymmetric transitive closures suffices for monochromatic-reachable independent set

We present a formal proof of the Sands-Sauer-Woodrow (SSW) theorem using the Rocq proof assistant and the MathComp/SSReflect library. The SSW theorem states that in a directed graph whose edges are colored with two colors and that contains no monochromatic infinite outward path, there exists an independent set S of vertices such that every vertex outside S can reach S by a monochromatic path. We formalize the graph using two binary relations Eb and Er, representing the blue and red edges respectively, and we develop a dedicated library for binary relations represented as classical sets. Beyond formalizing the original SSW theorem, we establish a strictly stronger version in which the assumption ``no monochromatic infinite outward path'' is replaced by the weaker condition that the asymmetric parts of the transitive closures of Eb and Er admit no infinite outward paths. The original SSW theorem is then recovered as a corollary via a lemma showing that an infinite path for the asymmetric part of the transitive closure of a relation implies an infinite path for the relation.
cs.DM 2026-04-23

Rainbow matching is poly-time if most color classes are multipartite

A Complexity Dichotomy for Generalized Rainbow Matchings Based on Color Classes

The problem is NP-hard otherwise, with hardness holding even when every color class is one of three small forbidden graphs on four vertices.

Given an edge-colored graph, the Maximum Rainbow Matching problem asks for a maximum-cardinality matching of the graph that contains at most one edge from each color. We provide the following complexity dichotomy for this problem based on the structure of the color classes: Maximum Rainbow Matching admits a polynomial-time algorithm if almost every color class is a complete multipartite graph and it is NP-hard otherwise. To prove the NP-hardness-part of the dichotomy, we first show that the problem remains NP-hard even if every color class is a subgraph on four vertices that is either a matching of size two, a path on four vertices or a paw. We then leverage this result to all color classes that are not complete multipartite graphs. For this purpose, we introduce color-closed graph classes, which seem to be an appropriate notion for obtaining complexity classifications for rainbow problems and may be of independent interest. To prove the positive part of the dichotomy, we show that the problem essentially reduces to computing a maximum $(l, u)$-matching, where we heavily exploit that almost all color classes are complete multipartite graphs. In the case where all color classes are complete multipartite, we provide a polynomial-time algorithm that computes a maximum matching containing at most $m_i$ edges from each color class $i$.
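For intuition about how the color constraint interacts with matchings, here is a brute-force check of Maximum Rainbow Matching on toy instances. This exponential sketch is our own, unlike the paper's polynomial algorithm for the multipartite case:

```python
from itertools import combinations

def max_rainbow_matching(edges, color):
    """Largest matching using at most one edge per color (brute force)."""
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            endpoints = [v for e in sub for v in e]
            if (len(set(endpoints)) == 2 * r              # vertex-disjoint
                    and len({color[e] for e in sub}) == r):  # all colors distinct
                return r
    return 0

edges = [(0, 1), (1, 2), (2, 3)]
# The unique maximum matching {(0,1),(2,3)} is monochromatic, so the
# best rainbow matching has size 1 ...
assert max_rainbow_matching(edges, {(0, 1): 'r', (1, 2): 'b', (2, 3): 'r'}) == 1
# ... but with distinct colors the full matching is rainbow.
assert max_rainbow_matching(edges, {(0, 1): 'r', (1, 2): 'b', (2, 3): 'g'}) == 2
```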
cs.DM 2026-04-22

Completely independent Steiner trees ensure dual-disjoint paths for terminals

Completely Independent Steiner Trees

The condition demands internally vertex- and edge-disjoint paths between every terminal pair, supporting new bounds and algorithms on planar graphs and graphs of bounded treewidth.

Spanning trees are fundamental for efficient communication in networks. For fault-tolerant communication, it is desirable to have multiple spanning trees to ensure resilience against failures of nodes and edges. To this end, various notions of disjoint or independent spanning trees have been studied, including edge-disjoint, node/edge-independent, and completely independent spanning trees. Alongside these, several Steiner variants have also been investigated, where the trees are required to span a designated subset of vertices called terminals. For instance, the study of edge-disjoint spanning trees has been extended to edge-disjoint Steiner trees; a stronger variant is the problem of internally disjoint Steiner trees, where any two Steiner trees intersect exactly in the terminals. In this paper, we investigate the Steiner analogue of completely independent spanning trees, which we call \emph{completely independent Steiner trees}. A set of Steiner trees is completely independent if, for every pair of terminals $u,v$, the $(u,v)$-paths in all the Steiner trees are internally vertex-disjoint and edge-disjoint. This notion generalizes both completely independent spanning trees and internally disjoint Steiner trees. We provide a systematic study of completely independent Steiner trees from structural, algorithmic, and complexity-theoretic perspectives. In particular, we present several characterisations, connectivity bounds, algorithms, hardness results, and applications to special graph classes such as planar graphs and graphs of bounded treewidth. Along the way, we also introduce a directed variant of completely independent spanning trees via an equivalence with completely independent Steiner trees.
cs.DM 2026-04-20

Rooted metric polytope volume greatly exceeds elliptope on complete graphs

On the volume of the elliptope and related metric polytopes

Comparisons show the metric polytope is tighter than the elliptope only for small n, with the ordering reversing at large n

In this paper, we investigate the relationships between the volumes of four convex bodies: the cut polytope, metric polytope, rooted metric polytope, and elliptope, defined on graphs with $n$ vertices. The cut polytope is contained in each of the other three, which, for optimization purposes, provide polynomial-time relaxations. It is therefore of interest to see how tight these relaxations are. Worst-case ratio bounds are well known, but these are limited to objective functions with non-negative coefficients. Volume ratios, pioneered by Jon Lee with several co-authors, give global bounds and are the subject of this paper. For the rooted metric polytope over the complete graph, we show that its volume is much greater than that of the elliptope. For the metric polytope, for small values of $n$, we show that its volume is smaller than that of the elliptope; however, for large values, volume estimates suggest the converse is true. We also give exact formulae for the volume of the cut polytope for some families of sparse graphs.
cs.DM 2026-04-20

Halfspace separation solvable in polynomial time for three graph classes

Halfspace separation in geodesic convexity

NP-complete on general graphs, the decision problem becomes efficient on weakly bridged graphs, pseudo-modular graphs, and matroid basis graphs.

Let $G = (V, E)$ be a simple connected undirected graph. A set $X \subseteq V$ is \emph{geodesically convex} if for any pair of vertices $x, y \in X$, all vertices on all shortest paths in $G$ from $x$ to $y$ are contained in $X$. A set $H \subseteq V$ is said to be a \emph{halfspace} if both $H$ and its complement (denoted by $H^c$) are convex. Given two sets $A, B \subseteq V$, the \emph{halfspace separation} problem asks if there exist complementary halfspaces $H, H^c$ such that $A \subseteq H$ and $B \subseteq H^c$. The halfspace separation problem is known to be NP-complete for the geodesic convexity of general graphs. We show that geodesic halfspace separation is polynomial for weakly bridged graphs, pseudo-modular graphs, and the basis graphs of matroids.
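Geodesic convexity of a vertex set is easy to test from BFS distances, since w lies on some shortest x-y path iff d(x,w) + d(w,y) = d(x,y). A small sketch with our own helper names:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Single-source shortest-path distances by BFS."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_geodesically_convex(adj, X):
    """X is convex iff every vertex on a shortest path between two
    members of X is itself in X."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    for x, y in combinations(sorted(X), 2):
        for w in adj:
            if w not in X and dist[x][w] + dist[w][y] == dist[x][y]:
                return False
    return True

# On the 4-cycle 0-1-2-3-0: an edge is convex, an antipodal pair is not.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert is_geodesically_convex(C4, {0, 1})
assert not is_geodesically_convex(C4, {0, 2})  # 1 and 3 both lie between
```

The halfspace separation problem then asks for a convex set H with convex complement separating two given sets, which is the hard part in general.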
cs.DM 2026-04-14

Exact closeness formulas derived for middle graphs under vertex failures

Analyzing Network Robustness via Residual Closeness

Derivations on special graph classes produce bounds and relations to line graphs for analyzing network robustness.

Networks are inherently vulnerable to vertex failures, making the analysis of their structural robustness a fundamental problem in graph theory. In this study, we investigate the closeness and vertex residual closeness of graphs, with a particular focus on the middle graph representations of certain special graph classes, which provide a richer structural framework for analysis. We derive exact expressions for the closeness values of these middle graphs and determine their residual closeness under vertex failures. By utilizing results obtained from specific graph families, we establish several general bounds for broader graph classes. Furthermore, by exploiting the relationship between the closeness of a graph, its line graphs, and middle graphs, we obtain new results that relate these three structures. In addition, we propose an algorithm for computing closeness in middle graphs and provide a detailed analysis of its performance.
cs.DM 2026-04-13

OPI admits solutions better than DQI semicircle law

On Worst-Case Optimal Polynomial Intersection

Reduction to secret sharing leakage resilience proves improvements above n/m=0.62 and perfect solutions above 0.75 for half-size lists.

The Optimal Polynomial Intersection (OPI) problem is the following: Given sets $S_1, \ldots, S_m \subseteq \mathbb{F}$ and evaluation points $a_1, \ldots, a_m \in \mathbb{F}$, find a polynomial $Q \in \mathbb{F}[x]$ of degree less than $n$ so that $Q(a_i) \in S_i$ for as many $i \in \{1, 2, \ldots, m\}$ as possible. Decoded Quantum Interferometry (DQI) is a quantum algorithm that efficiently returns good solutions to the problem, even on worst-case instances (Jordan et. al., 2025). The quality of the solutions returned follows a semicircle law, which outperforms known efficient classical algorithms. But does DQI obtain the best possible solutions? That is, are there solutions better than the semicircle law for worst-case OPI instances? Surprisingly, before this work, the best existential results coincide with (and follow from) the best algorithmic results. In this work, we show that there are better solutions for worst-case OPI instances over prime fields. In particular, DQI and the semicircle law are not optimal. For example, when the lists $S_i$ have size $\rho p$ for $\rho \sim 1/2$, our results imply the existence of a solution that asymptotically beats the semicircle law whenever $n/m \geq 0.6225$, and we show that an asymptotically perfect solution exists whenever $n/m \geq 0.7496$. Our results generalize to Max-LINSAT problems derived from any Maximum Distance Separable (MDS) code, and to any $\rho \in (0,1)$. The key insight to our improvement is a connection to local leakage resilience of secret sharing schemes. Along the way, we recover several re-proofs of the existence of solutions achieving the semicircle law.
cs.DM 2026-04-06 Recognition

Census dual graphs are nearly planar and triangulated

Census Dual Graphs: Properties and Random Graph Models

Perturbed grids and Delaunay models of random points best match real US county and tract adjacency graphs for districting studies.

In the computational study of political redistricting, feasibility necessitates the use of a discretization of regions such as states, counties, and towns. In nearly all cases, researchers use a dual graph, whose vertices represent small geographic units (such as census blocks or voting precincts) with edges for geographic adjacency. A political districting plan is a partition of this graph into connected subgraphs that satisfy certain additional properties, such as connectedness, compactness, and equal population. Though dual graphs underlie nearly all computational studies of political redistricting, little is known about their properties. This is a unique graph class that has been described colloquially as `nearly planar, nearly triangulated,' but thus far there has been a lack of evidence to support this description. In this paper we study dual graphs for counties, census tracts, and census block groups across the United States in order to understand and characterize this graph class. We also consider several random graph models (most based on randomly perturbing grids or Delaunay triangulations of random point sets), and determine which most closely resemble dual graphs under key metrics. This work lays an initial foundation for understanding and modeling the properties of dual graphs; this will provide invaluable insight to researchers developing algorithms using them to understand, assess, and quantify the properties of political districting plans.
cs.DM 2026-04-06 Recognition

Triplet encoding reproduces Most Permissive Boolean dynamics

A Boolean encoding of the Most Permissive semantics for Boolean networks

Each component becomes three variables whose asynchronous updates match the original network's attainable states exactly.

Boolean networks are widely used to model biological regulatory networks and study their dynamics. Classical semantics, such as the asynchronous semantics, do not always accurately capture transient or asymptotic behaviors observed in quantitative models. To address this limitation, the Most Permissive semantics was introduced by Paulev\'e et al., extending Boolean dynamics with intermediate activity levels that allow components to transiently activate or inhibit their targets during transitions. In this work, we provide a Boolean encoding of the Most Permissive semantics: each component of the original network is represented by a triplet of Boolean variables, and we derive the extended logical function governing the resulting network. We prove that the asynchronous dynamics of the encoded network exactly reproduces the attainability properties of the original network under Most Permissive semantics. This encoding is implemented as a modifier within the bioLQM framework, making it directly compatible with existing tools such as GINsim. To address scalability limitations, we further extend the tool to support partial unfolding, restricted to a user-defined subset of components.
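For background, the classical asynchronous semantics that the encoded network is run under can be sketched directly; the toy network and names are illustrative, and this is the standard semantics, not the Most Permissive encoding itself.

```python
def async_successors(state, funcs):
    """Asynchronous semantics of a Boolean network: from `state` (a tuple
    of 0/1 values), each successor flips exactly one component i whose
    update function funcs[i] disagrees with its current value."""
    succs = []
    for i, f in enumerate(funcs):
        v = f(state)
        if v != state[i]:
            s = list(state)
            s[i] = v
            succs.append(tuple(s))
    return succs

# toy 2-component network: x1' = x2, x2' = not x1
funcs = [lambda s: s[1], lambda s: 1 - s[0]]
```

For example, from state `(0, 0)` only the second component is unstable, so the unique asynchronous successor is `(0, 1)`.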
cs.DM 2026-04-06 Recognition

Treewidth-t graphs admit O(t log t) proper ball compression

Sample compression schemes for balls in structurally sparse graphs

Ball hypergraphs in graphs of bounded treewidth or cliquewidth have proper sample compression of size O(t log t), tight up to log factor.

Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. In a sample compression scheme, we are given a large sample of vertices of a fixed hypergraph with labels indicating the containment in some hyperedge. The task is to compress the sample in such a way that we can retrieve the labels of the original sample. The size of a sample compression scheme is the amount of information that is kept in the compression. Every hypergraph with a sample compression scheme of bounded size must have bounded VC-dimension. Conversely, Moran and Yehudayoff (J. ACM, 2016) showed that every hypergraph of bounded VC-dimension admits a sample compression scheme of bounded size. We study a specific class of hypergraphs emerging from balls in graphs. The schemes that we construct (contrary to the ones constructed by Moran and Yehudayoff) are \textit{proper}, meaning that we retrieve not only the labeling of the original sample but also a hyperedge (ball) consistent with the original labeling. First, we prove that for every graph $G$ of treewidth at most $t$, the hypergraph of balls in $G$ has a proper sample compression scheme of size $\mathcal{O}(t\log t)$; this is tight up to the logarithmic factor and improves the quadratic (improper) bound that follows from the result of Moran and Yehudayoff. Second, we prove an analogous result for graphs of cliquewidth at most $t$.
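A minimal concrete instance of a *proper* scheme: balls in a path graph are exactly intervals of integers, and a realizable labeled sample compresses to just its two extreme positive points, a size-2 scheme. The functions and the realizability assumption below are ours for illustration, not the paper's construction.

```python
def compress(sample):
    """Proper compression for intervals: from a sample of (point, label)
    pairs realizable by some interval, keep only the leftmost and
    rightmost positively labeled points (or None if all-negative)."""
    pos = [x for x, lab in sample if lab]
    return (min(pos), max(pos)) if pos else None

def reconstruct(comp, points):
    """Recover labels properly: output the interval [lo, hi] spanned by
    the kept points (the empty ball if nothing was kept)."""
    if comp is None:
        return [(x, False) for x in points]
    lo, hi = comp
    return [(x, lo <= x <= hi) for x in points]
```

Because the original sample was labeled by some interval containing the kept extremes, `[lo, hi]` reproduces every label, and the output is itself a ball: the scheme is proper.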
cs.DM 2026-04-03 Recognition

Exact Matching on bipartite graphs solved in O(n^6) time

Bipartite Exact Matching in P

Affine-slice nonvanishing theorem on weighted determinants replaces randomization with deterministic checks on brace blocks

The Exact Matching problem asks whether a bipartite graph with edges colored red and blue admits a perfect matching with exactly $t$ red edges. Introduced by Papadimitriou and Yannakakis in 1982, the problem has resisted deterministic polynomial-time algorithms for over four decades, despite admitting a randomized solution via the Schwartz-Zippel lemma since 1987. We establish the Affine-Slice Nonvanishing Theorem (ASNC) for all bipartite braces: a Vandermonde-weighted determinant polynomial is nonzero whenever the exact-$t$ fiber is nonempty. This yields a deterministic $O(n^6)$ algorithm for Exact Matching on all bipartite graphs via the tight-cut decomposition into brace blocks. The proof proceeds by structural induction on McCuaig's brace decomposition. We handle the McCuaig exceptional families via a parity-resolved cylindric-network positivity argument, the replacement determinant algebra, and the narrow-extension cases (KA, $J3 \to D1$). For the superfluous-edge step, we introduce two closure tools: a matching-induced Two-extra Hall theorem that resolves the rank-$(m-2)$ branch via projective-collapse contradiction, and a distinguished-state $q$-circuit lemma that eliminates the rank-$(m-1)$ branch entirely by showing that any minimal dependent set containing the superfluous state forces rank $m-2$. A Lean 4 formalization accompanies the paper. The formalization reduces the main theorem to eight explicit hypotheses corresponding to results proved here and in McCuaig (2001), with all algebraic tools, the induction skeleton, and the combinatorial infrastructure fully machine-checked.
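For scale, the problem itself is easy to state and to brute-force on tiny instances; the checker below is an illustrative factorial-time baseline, unrelated to the paper's deterministic $O(n^6)$ algorithm.

```python
from itertools import permutations

def exact_matching(n, color, t):
    """Brute-force Exact Matching on an n x n bipartite graph:
    color[(i, j)] is 'r' or 'b' if edge (i, j) exists; return True iff
    some perfect matching uses exactly t red edges."""
    for perm in permutations(range(n)):          # left i matched to perm[i]
        if all((i, perm[i]) in color for i in range(n)):
            reds = sum(color[(i, perm[i])] == 'r' for i in range(n))
            if reds == t:
                return True
    return False

# 2x2 example: one all-red and one all-blue perfect matching
color = {(0, 0): 'r', (1, 1): 'r', (0, 1): 'b', (1, 0): 'b'}
```

Here the two perfect matchings have 2 and 0 red edges, so the answer is yes for $t \in \{0, 2\}$ and no for $t = 1$, showing the feasible red counts need not form a contiguous range.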
cs.DM 2026-04-02 2 theorems

Banach density hits 1/2 only for finite-rank language spaces in 1D

Banach density of generated languages: Dichotomies in topology and dimension

Infinite Cantor-Bendixson rank forces generated sets to contain arbitrarily large sparse gaps, unlike asymptotic density; higher dimensions add a Ramsey-theoretic obstacle.

The formalism of language generation in the limit studies generative models by requiring an algorithm, given strings from a hidden true language, to eventually generate new valid strings. A core issue is the tension between validity and breadth. Prior work quantified breadth via asymptotic density, where the priority is generating strings early in a natural countable ordering. Here, we study density when the strings are embedded in $d$ dimensions, a ubiquitous structure in current generative models. Our goal is for the generated strings to be dense throughout the embedding. This requires a different measure, the Banach density, which captures whether a set contains large sparse regions. Using Banach density uncovers a rich structure based on dimension and the topology of the language collection. We prove that in dimension one, when the underlying topological space has finite Cantor-Bendixson rank, an algorithm can always generate a subset of the true language with an optimal lower Banach density of 1/2. However, for collections with infinite Cantor-Bendixson rank, there are cases where no algorithm can achieve any positive lower Banach density; the generated set must contain arbitrarily large, sparse regions. This reveals a topological contrast unseen with asymptotic density, where 1/2 is always achievable. We also extend our results to a family of measures interpolating between Banach and asymptotic density. Finally, in dimension $d \geq 2$, our positive result for Banach density encounters a Ramsey-theoretic obstacle regarding two-colored point sets. Overcoming this requires a nondegeneracy condition: the embedding of the true language must be sufficiently represented throughout the full $d$-dimensional space.
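For readers unfamiliar with the measure, one standard definition of the lower Banach density of a set $A \subseteq \mathbb{N}$ is the following (the paper works with a $d$-dimensional embedded variant, so take this as a one-dimensional sketch):

```latex
\underline{\mathrm{BD}}(A) \;=\; \lim_{k \to \infty} \, \min_{n \ge 0} \, \frac{\bigl|A \cap [n, n+k)\bigr|}{k}.
```

Taking the minimum over all windows $[n, n+k)$, rather than fixing the window at $[0, k)$ as asymptotic density does, is what makes the measure sensitive to arbitrarily large sparse gaps anywhere in $A$: a single family of growing gaps drives the lower Banach density to 0 even when the asymptotic density stays positive.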

browse all of cs.DM → full archive · search · sub-categories