pith. machine review for the scientific record.

cs.LO

Logic in Computer Science

Covers all aspects of logic in computer science, including finite model theory, logics of programs, modal logic, and program verification. Programming language semantics should have Programming Languages as the primary subject area. Roughly includes material in ACM Subject Classes D.2.4, F.3.1, F.4.0, F.4.1, and F.4.2; some material in F.4.3 (formal languages) may also be appropriate here, although Computational Complexity is typically the more appropriate subject area.

cs.LO 2026-05-13 2 theorems

Tool synthesizes reactive systems over infinite integers

sweap: Reactive Synthesis for Infinite-State Integer Problems

Sweap refines finite-state abstractions to handle integer variables and outperforms the prior tool on benchmarks.

[Figure from the paper]
Abstract
Recent years have seen a significant increase in interest in reactive synthesis from specifications that relate to infinite state spaces. We present sweap, a tool for synthesis of infinite-state Linear Integer Arithmetic reactive systems. sweap implements a CEGAR approach, relying on state-of-the-art finite-state synthesis tools as black boxes to solve abstract synthesis problems. sweap supports the most common input formalisms for infinite-state reactive-synthesis problems: Temporal Stream Logic Modulo Theories, Reactive Program Games, the bespoke input of the ISSY tool, and our own bespoke input. We present a mature version of sweap with novel features: a dual abstraction approach that improves its capabilities in proving unrealisability, support for nondeterministic and unbounded updates, more general initialization of variables, and equirealisable reductions for optimisation. Experimental evaluation shows that sweap outperforms its only competitor in this domain.
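The CEGAR loop the abstract describes can be sketched generically. The decomposition into `solve_abstract`, `check_concrete`, and `refine`, and the toy threshold instance below, are illustrative assumptions, not sweap's actual interfaces.

```python
def cegar(solve_abstract, check_concrete, refine, abstraction, max_iters=10):
    """Generic CEGAR skeleton: solve the abstract problem, validate the
    candidate against the concrete semantics, and refine the abstraction
    on spurious results. Illustrative only, not sweap's API."""
    for _ in range(max_iters):
        candidate = solve_abstract(abstraction)
        if candidate is None:
            return ("unrealizable", abstraction)
        spurious = check_concrete(candidate)
        if spurious is None:
            return ("realizable", candidate)
        abstraction = refine(abstraction, spurious)
    return ("unknown", abstraction)

# Toy instance: the abstraction is an integer threshold; the "concrete"
# check accepts a candidate only once the threshold reaches 3.
result = cegar(
    solve_abstract=lambda a: a,
    check_concrete=lambda c: None if c >= 3 else c,
    refine=lambda a, cex: a + 1,
    abstraction=0,
)
print(result)  # → ('realizable', 3)
```

The black-box finite-state synthesizer sits behind `solve_abstract`; refinement terminates here because each counterexample strictly strengthens the abstraction.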
cs.LO 2026-05-13 2 theorems

New method computes conditional probabilities in MDPs faster

Fast Computation of Conditional Probabilities in MDPs and Markov Chain Families

It avoids cyclic reductions, runs linearly on acyclic cases, and processes millions of chains together for monitoring and program analysis.

[Figure from the paper]
Abstract
Computing optimal conditional reachability probabilities in Markov decision processes (MDPs) is tractable by a reduction to reachability probabilities. Yet, this reduction yields cyclic MDPs that are notoriously hard to solve. We present an alternative, practically efficient method to compute optimal conditional reachabilities. This new method is numerically stable, can decide the threshold problem in linear time on acyclic MDPs, and yields performance comparable to standard reachability queries. We also integrate the method in an abstraction-refinement framework to analyse millions of Markov chains at once. We demonstrate the efficacy of the new methods on benchmarks from Bayesian network analysis, probabilistic programs, and runtime monitoring and show speed-ups of up to multiple orders of magnitude.
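As a toy illustration of the quantity involved (not the paper's algorithm): on an acyclic Markov chain where the goal is only reachable through the condition state, the conditional reachability probability is a quotient of two ordinary reachability probabilities. The chain and state names below are our own example.

```python
def reach_prob(chain, targets, start):
    # Reachability probability in a finite acyclic Markov chain by
    # backward induction; `chain` maps each state to a list of
    # (successor, probability) pairs.
    memo = {}
    def p(s):
        if s in targets:
            return 1.0
        if not chain.get(s):
            return 0.0
        if s not in memo:
            memo[s] = sum(pr * p(t) for t, pr in chain[s])
        return memo[s]
    return p(start)

# Goal is only reachable through b, so P(reach goal | reach b) is the
# quotient P(reach goal) / P(reach b).
chain = {"s": [("b", 0.5), ("fail", 0.5)],
         "b": [("goal", 0.4), ("fail", 0.6)],
         "goal": [], "fail": []}
p_goal = reach_prob(chain, {"goal"}, "s")   # 0.2
p_cond = reach_prob(chain, {"b"}, "s")      # 0.5
print(p_goal / p_cond)  # → 0.4
```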
cs.LO 2026-05-13 1 theorem

FO2 groundings can force exponential DNNF size

On Knowledge Compilation For Two-Variable First-Order Logic

A symmetry-aware compiler branches on types and caches residuals to compile many cases efficiently despite the lower bound.

Abstract
Knowledge compilation transforms logical theories into circuit representations that support efficient reasoning. We study this problem for propositional groundings of FO2, the two-variable fragment of first-order logic over finite domains. Given an FO2 sentence and a domain of size n, its grounding yields a propositional theory over ground atoms. We ask whether such theories admit compact representations in DNNF-based and related knowledge compilation languages, and whether these can be constructed efficiently, both with respect to the domain size n for a fixed sentence. We show first that compact compilation is impossible in general: there exists an FO2 sentence whose grounding over a domain of size n requires DNNF size $2^{\Omega(n)}$. On the positive side, we develop a two-stage compiler that exploits the symmetries inherent in the propositional groundings of FO2 sentences. It branches on unary and binary types rather than individual ground atoms, in a similar spirit to lifted inference for probabilistic relational models. Moreover, it optimizes the compilation process by efficiently identifying and caching residual subproblems that are equivalent with respect to future extensions. Experiments show the practical efficiency of our approach, which often produces smaller circuits and compiles faster than straightforward grounding-based baselines.
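To make the grounding step concrete: even a simple FO2 sentence produces a propositional theory that grows quadratically in the domain size. The sentence below, ∀x∀y (E(x,y) → E(y,x)), is our own example, not one from the paper.

```python
from itertools import product

def ground_symmetry(n):
    # Ground the FO2 sentence  ∀x∀y (E(x,y) → E(y,x))  over domain
    # {0, ..., n-1}: one ground atom E(a,b) per ordered pair, and one
    # clause (¬E(a,b) ∨ E(b,a)) per ordered pair — Θ(n²) of each.
    atoms = {(a, b): f"E({a},{b})" for a, b in product(range(n), repeat=2)}
    clauses = [(f"-{atoms[(a, b)]}", atoms[(b, a)])
               for a, b in product(range(n), repeat=2)]
    return atoms, clauses

atoms, clauses = ground_symmetry(4)
print(len(atoms), len(clauses))  # → 16 16
```

The symmetry the paper's compiler exploits is visible here: all clauses are permutations of one template, so branching on types rather than individual atoms avoids treating the n² instances separately.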
cs.LO 2026-05-12 Recognition

Shields enforce probabilistic safety bounds in MDPs

Shields to Guarantee Probabilistic Safety in MDPs

Offline and online constructions keep unsafe-event probability below a threshold even though full permissiveness must be sacrificed.

[Figure from the paper]
Abstract
Shielding is a prominent model-based technique to ensure safety of autonomous agents. Classical shielding aims to ensure that nothing bad ever happens and comes with strong guarantees about safety and maximal permissiveness. However, shielding for probabilistic safety, where something bad is allowed to happen with an acceptable probability, has proven to be more intricate. This paper presents a formal framework that conservatively extends classical shields to probabilistic safety. In this framework, we (i) demonstrate the impossibility of preserving the strong guarantees on safety and permissiveness, (ii) provide natural shields with weaker guarantees, and (iii) introduce offline and online shield constructions ensuring strong safety guarantees. The empirical evaluation highlights the practical advantages of the new shields, as well as their computational feasibility.
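A minimal sketch of the shielding idea, under our own assumptions: a shield restricts each state to actions whose unsafe-event probability stays below a threshold. The probabilities are assumed precomputed by a model checker; this is not the paper's construction.

```python
def shield(state_actions, unsafe_prob, delta):
    # Allow in each state only the actions whose probability of ever
    # triggering the unsafe event is at most delta. `unsafe_prob[(s, a)]`
    # is assumed to come from a probabilistic model checker; the names
    # and shape here are illustrative.
    return {s: [a for a in acts if unsafe_prob[(s, a)] <= delta]
            for s, acts in state_actions.items()}

allowed = shield(
    {"s0": ["left", "right"], "s1": ["stay"]},
    {("s0", "left"): 0.02, ("s0", "right"): 0.2, ("s1", "stay"): 0.0},
    delta=0.05,
)
print(allowed)  # → {'s0': ['left'], 's1': ['stay']}
```

The abstract's impossibility result concerns exactly this tension: pruning "right" in s0 sacrifices permissiveness, yet keeping every action can push the overall unsafe probability past the threshold.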
cs.LO 2026-05-12 2 theorems

FOMOD logic testable in constant time on finitary graphs

Constant time testability of first-order logic with modulo counting on finitary graphs

Number-theoretic patchability on Hanf normal forms lets constant-size samples decide global properties when degree and component size are bounded.

[Figure from the paper]
Abstract
This paper studies algorithmic meta theorems for property testing with \emph{constant running time} in the bounded degree model. In (Adler, Harwath 2018) it was shown that on graph classes $\mathcal C^{w}_d$ consisting of all graphs with both degree at most $d$ and treewidth at most $w$, every problem expressible in monadic second-order logic with counting (CMSO) is testable with \emph{polylogarithmic} running time (where $d,w\in \mathbb N$ are fixed). It was left open whether this can be improved to \emph{constant} running time. In this paper we give a positive answer for testing CMSO on classes $\mathcal C^{c}_d$, where $d$ bounds the degree and $c$ bounds the component size. Our main result shows constant time testability of first-order logic with modulo counting (FOMOD) on $\mathcal C^{c}_d$. For our proof we tailor the Hanf normal form of FOMOD to our setting, and we exhibit a number-theoretic `patchability' condition that allows us to infer global information on the input graph from a local sample of constant size. We believe that our `patchability' might be of independent interest. The step from FOMOD to CMSO then follows from a result by (Eickmeyer, Elberfeld, Harwath, 2017) on the expressive power of order invariant monadic second-order logic on classes of bounded treedepth.
cs.LO 2026-05-12 2 theorems

Preservation theorems hold for all lattice semirings

Preservation Theorems in Semiring Semantics

Łoś-Tarski and homomorphism results carry over to first-order logic on lattice semirings used in database provenance.

Abstract
We study the status of preservation theorems such as the {\L}o\'s-Tarski theorem and the homomorphism preservation theorem in the context of semiring semantics. Semiring semantics has its origins in the provenance analysis of database queries. Depending on the underlying semiring, it allows us to track which atomic facts are responsible for the truth of a statement, or to extract practical information about the evaluation, such as costs or confidence. The systematic development of semiring semantics for first-order logic and other logical systems raises the question of to what extent classical model-theoretic results can be generalised to this setting and how such results depend on the underlying semiring. The definitions of semantic properties such as preservation under extensions, substructures, or homomorphisms naturally generalise to the setting of semiring semantics. However, the status of the corresponding preservation theorem strongly depends on the algebraic properties of the particular semirings. We prove that these preservation theorems do indeed hold for all lattice semirings (a quite large class, encompassing practically relevant semirings and in particular all min-max semirings). The proofs combine adaptations of the classical compactness and amalgamation methods with specific reduction methods for logical entailment that have been developed in semiring semantics. On the other hand, variants of the existential preservation theorem fail for many other semirings, including the tropical semiring, the Viterbi semiring, the {\L}ukasiewicz semiring, and the natural semiring. Surprisingly, the existential preservation theorem does hold for finite interpretations in a number of semirings, including all lattice semirings. Thus, the situation for these semirings is in sharp contrast to the Boolean case, where the {\L}o\'s-Tarski theorem holds in general, but not in the finite.
cs.LO 2026-05-12 Recognition

Reachability in RDMA programs is undecidable even in a restricted fragment

On the Verification Problem of Remote Direct Memory Access programs (Extended Version with Appendix)

A normal form for violations reduces robustness checking to reachability in counter programs with tight complexity bounds.

[Figure from the paper]
Abstract
Remote Direct Memory Access (RDMA) is a technology that allows direct memory access from the memory of one computer into that of another without involving either one's operating system. This enables high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. In this paper, we study the reachability and robustness problems for RDMA programs. We show that reachability is undecidable in general, even for a restricted fragment of the model. We then focus on robustness, which asks whether a program exhibits the same behaviours under the RDMA and sequential consistency (SC) semantics, and prove that this problem is decidable. Our central technical result establishes a normal form for robustness violations, showing that any non-robust program admits a violating execution of a specific form. We then leverage this normal form to obtain a decision procedure that reduces robustness to reachability in finite-state programs with counters, yielding an EXPSPACE upper bound in the general case, and a PSPACE upper bound in the absence of poll operations. Finally, we also show that both of these bounds are optimal.
cs.LO 2026-05-12 1 theorem

Neuro-symbolic system turns G-code collisions into bounding-box fixes

Correct-by-Construction G-Code Generation: A Neuro-Symbolic Approach via Separation Logic

Separation logic verifier feeds minimal spatial directives back to the neural generator for iterative self-correction in manufacturing paths.

[Figure from the paper]
Abstract
This paper proposes a neuro-symbolic framework for G-code generation by integrating the GLLM neural method (Abdelaal et al., 2025) with our established Separation Logic (SL) verifier. We introduce a two-component architecture where GLLM serves as a creative generator and the SL Prover, utilizing the Spatial Heap model, acts as a deterministic verifier. By defining physical collisions as logical Spatial Data Races - violations of the separating conjunction in SL - the framework translates proof failures into structured mathematical feedback. These failures are condensed into minimal bounding boxes that act as precise spatial directives for GLLM's iterative self-correction. This synergy establishes a self-correcting generative cycle that reduces the need for manual oversight, supporting the production of verified G-code to enhance safety in autonomous manufacturing.
cs.LO 2026-05-12 2 theorems

Separation logic catches CNC collisions as spatial data races

Separation Logic for Verifying Physical Collisions of CNC Programs

Workspace modeled as heap lets formal triples prove tool-path disjointness without repeated geometric simulation.

[Figure from the paper]
Abstract
Safety verification in Computer Numerical Control (CNC) machining has traditionally relied on simulation-based methods that require repetitive tests when requirements change. This paper introduces a formal verification framework that conceptualizes the physical CNC workspace as a Spatial Heap, treating physical occupancy as a managed logical resource. Central to our approach is a Parser-Prover Handshake that decouples machine kinematics from formal logic. By mapping tool trajectories and safety buffers into a discrete spatial model prior to evaluation, the framework enables the use of Separation Logic (SL) to verify safety via formal triples. Within this model, physical collisions are redefined as logical Spatial Data Races, detected through the failure of the separating conjunction to establish disjointness. Furthermore, we extend the methodology to collaborative environments using Concurrent Separation Logic (CSL), where physical hand-offs are verified as formal ownership transfers. This approach provides a scalable, mathematically grounded alternative to geometric simulation, offering a foundation for autonomous, zero-collision manufacturing.
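The "spatial data race" idea can be illustrated with a plain disjointness check on axis-aligned bounding boxes. This stand-in check, with our own box coordinates, is not the SL prover itself: the paper establishes disjointness via the separating conjunction, but the geometric content of a violation is the same overlap.

```python
def disjoint(box1, box2):
    # Two axis-aligned boxes, each given as ((xmin, ymin, zmin),
    # (xmax, ymax, zmax)), occupy separate space iff they are disjoint
    # on at least one axis. An overlap corresponds to the failure of
    # the separating conjunction — a "spatial data race".
    (lo1, hi1), (lo2, hi2) = box1, box2
    return any(hi1[i] < lo2[i] or hi2[i] < lo1[i] for i in range(3))

tool_path = ((0, 0, 5), (2, 2, 9))
fixture   = ((1, 1, 0), (3, 3, 4))
clash     = ((1, 1, 3), (3, 3, 6))

print(disjoint(tool_path, fixture))  # → True  (separated on the z axis)
print(disjoint(tool_path, clash))    # → False (overlap: a spatial data race)
```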
cs.LO 2026-05-12 2 theorems

Refinement calculus certifies index reductions for DAEs

A Deductive Refinement Calculus for Differential-Algebraic Programs

Differential-algebraic programs can be simplified step by step with syntactic proofs that each reduction preserves trajectories.

Abstract
This paper presents differential-algebraic refinement logic (dARL) with which one can deductively verify both properties and relations of differential-algebraic programs (DAPs) that extend hybrid dynamical systems with differential-algebraic equations (DAEs). A refinement calculus is introduced that enables the sound comparison of trajectories of differential-algebraic equations, crucially utilizing a novel trace-based semantics. This enables the incremental verification/simplification of complicated DAEs, while ensuring correctness at each step by the soundness of the calculus. The calculus is shown to be complete for certifying index reductions of DAEs, providing trustworthy syntactic proofs of correctness at each step of the reduction.
cs.LO 2026-05-12 1 theorem

Every prevision is the infimum of sublinear previsions

Just Previsions

The same holds as a supremum of superlinear ones under mild conditions, yielding homeomorphisms to hyperspaces via orthogonality.

Abstract
Previsions are positively homogeneous functionals, and are generalized forms of integration functionals. We investigate previsions -- just previsions, not sublinear or superlinear previsions as in previous work. We show that every prevision can be expressed as an infimum of sublinear previsions, and as a supremum of superlinear previsions under mild conditions. This extends to homeomorphisms between spaces of previsions and certain hyperspaces over spaces of sublinear or superlinear previsions, which can also be characterized in terms of orthogonality relations, making the construction a variant of a double powerspace construction.
cs.LO 2026-05-11 Recognition

Crumbling machine reverses CbV steps with constant space overhead

A Reversible Crumbling Abstract Machine for Plotkin's Call-by-Value

One little-studied abstract machine for Plotkin's calculus adds fixed extra data per beta reduction to allow full reversal.

[Figure from the paper]
Abstract
Landauer's embeddings enable the reversibility of computations for non-reversible programming languages, augmenting each intermediate state with enough data to reconstruct the previous state. An interesting research question is therefore how to reduce the space overhead required. In this work we propose a Landauer's embedding for Plotkin's call-by-value calculus (CbV). In order to control the computational complexity of CbV and turn the number of $\beta$-steps into a cost model, CbV is typically implemented via reduction machines. We show that one machine, which has not received much attention, exhibits a particularly compact Landauer's embedding, requiring only constant space overhead for each step.
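The Landauer-embedding idea can be shown on a toy irreversible computation: record, at each step, exactly the constant-size piece of information the step destroys, so the run can be replayed backwards. Integer halving here stands in for the machine's reduction steps; it is our illustration, not the paper's machine.

```python
def step(state):
    # Forward step of a toy non-reversible computation (integer halving),
    # augmented Landauer-style with one recorded bit per step — the
    # information that n // 2 would otherwise destroy.
    n, history = state
    history.append(n % 2)
    return n // 2, history

def unstep(state):
    # Exact inverse: pop the recorded bit and reconstruct the prior state.
    n, history = state
    return 2 * n + history.pop(), history

s = (13, [])
for _ in range(3):
    s = step(s)
print(s[0])  # → 1
for _ in range(3):
    s = unstep(s)
print(s[0])  # → 13
```

The history grows by a constant amount per step, which is the kind of overhead bound the paper establishes for its machine.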
cs.LO 2026-05-11 2 theorems

Two-variable logic on data words decidable for specific monoids

Set Automata and Limits of Decidability of Two-Variable Logic on Data Words

The guarded-predicate extension remains decidable exactly when the monoid is idempotent and its two-sided ideals are linearly ordered.

Abstract
We extend the two-variable logic on data words with guarded regular binary predicates of the form $\widetilde{L}(x,y)$ that is true if positions $x$ and $y$ are in the same class and the factor strictly between $x$ and $y$ is in the regular language $L$. We characterise the class of monoids for which the extension of the two-variable logic with guarded predicates recognised by the monoid is decidable, namely the class of idempotent monoids whose two-sided ideals are linearly ordered. For this, we introduce an automata formalism, set automata, that is equivalent to the class automata of Boja\'nczyk and Lasota and thus has an undecidable emptiness problem. We identify a subclass of set automata called ordered quasi-normal set automata that has a decidable emptiness problem by reduction to the emptiness problem of ordered multicounter automata. We show that the two-variable logic extended with guarded regular predicates recognised by a semigroup $S$ is expressively equivalent to a quasi-normal set automaton with the semigroup of transformations $S$. In particular, if $S$ is a linear band monoid then the resulting automaton is ordered, and the decidability result follows.
cs.LO 2026-05-11 2 theorems

Agda code makes well-scoped nameless syntax adequate for binders

Well-Scoped Locally Nameless Representation of Syntax

Generic parameterization by Plotkin signatures comes with a proof of adequacy to nameful syntax modulo alpha-equivalence.

[Figure from the paper]
Abstract
When using interactive theorem provers based on dependent type theory to define and reason about languages involving binding constructs, we advocate the use of a well-scoped version of the locally nameless method of representing syntax. This paper describes generic code parameterized by a Plotkin-style binding signature for this style of syntax representation within the Agda theorem prover, gives a proof of its adequacy with respect to naive nameful syntax modulo alpha-conversion and discusses some examples of its use.
cs.LO 2026-05-11 Recognition

Inverter moves before mapping trim circuit delays

Inverter Redistribution through Self-Dual and Self-Anti-Dual Function Transformation

Self-dual transforms on And-Inverter Graphs yield an average 0.49% delay reduction and up to 3.86% on square-root logic.

[Figure from the paper]
Abstract
And-Inverter Graph (AIG)-based logic synthesis has been a cornerstone of digital design automation for several decades. While numerous optimization techniques have been developed for both technology-independent and technology-dependent synthesis stages, existing technology mapping approaches predominantly employ graph-covering strategies directly on AIG representations without adequately addressing complemented edge distribution. Neglecting inverters creates a significant disconnect: complemented edges are systematically overlooked in technology-independent cost functions, yet they abruptly become critical during technology-dependent mapping. In this work, we introduce a delay-driven pre-processing stage that operates prior to technology mapping, designed to strategically redistribute complemented edges and mitigate the inverter-induced costs on critical paths. Experimental validation demonstrates that our delay-targeted methodology not only preserves original delay characteristics but also enables performance improvements. Notably, arithmetic logic in the EPFL combinational benchmark exhibits particular sensitivity to this approach, with our method achieving an average delay reduction of 0.49% and a maximum improvement of 3.86% on the case sqrt.
cs.LO 2026-05-11 Recognition

Sheaf condition certifies global design consistency from pairwise checks

Sheaves as a Means of Maintaining Consistency in Model-based Systems Engineering

Compatible local designs on engineering interfaces glue to a unique global system, with the equivalence machine-verified in Lean.

Abstract
We propose that the sheaf condition on a presheaf of design spaces provides a mathematical model for multi-view consistency in the architecture of cyber-physical systems (CPS). In model-based systems engineering, multiple engineering views -- electrical, thermal, mechanical, and software -- must be kept mutually consistent, yet current practice relies on informal procedures without a precise semantic account of global consistency. We construct an architectural site: a topological space whose points are pairwise interfaces between engineering domains and whose open sets represent engineering views. A design presheaf assigns to each view its local design space and to each inclusion the corresponding restriction map. We show that the sheaf condition on this presheaf is equivalent to compatibility on pairwise overlaps, yielding a local criterion for global multi-view consistency. The equivalence and a concrete three-view worked example are machine-verified in Lean 4 using Mathlib. The formalization establishes that the design presheaf is a sheaf, that the sheaf condition is equivalent to pairwise overlap compatibility, and that compatible local design families glue to unique global designs. Global consistency of an arbitrary number of views can be certified by checking only pairwise interface compatibility; compatible local designs determine a unique global design; derived properties computed by limit-preserving functors inherit the same consistency guarantee; and the entire verification chain admits machine-checkable proofs in Lean.
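The pairwise-gluing claim can be mimicked on a toy dict-based model: check compatibility on every pairwise interface and, if all checks pass, glue the local designs into one global design. The view names, the `shared` interface, and the dict encoding below are our assumptions; the paper's formalization is in Lean 4 with Mathlib, not Python.

```python
def glue(views, restrict):
    # `views` maps a view name to its local design (a dict of parameters);
    # `restrict(u, v, d)` projects design d of view u onto the u/v
    # interface. If every pairwise overlap agrees, return the glued
    # global design; otherwise report the inconsistency with None.
    names = list(views)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if restrict(u, v, views[u]) != restrict(v, u, views[v]):
                return None  # interfaces disagree: no global design
    global_design = {}
    for d in views.values():
        global_design.update(d)
    return global_design

views = {"electrical": {"bus_voltage": 12, "connector": "X1"},
         "thermal":    {"bus_voltage": 12, "max_temp": 85}}
shared = {"bus_voltage"}  # the electrical/thermal interface
restrict = lambda u, v, d: {k: d[k] for k in d if k in shared}
print(glue(views, restrict))
# → {'bus_voltage': 12, 'connector': 'X1', 'max_temp': 85}
```

This is the practical content of the sheaf condition in the abstract: only pairwise interface checks are needed to certify one consistent global design.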
cs.LO 2026-05-11 Recognition

DiLL models arise as two fibrations with a tangent functor

A Fibrational Perspective on Differential Linear Logic

Adapting type-theory techniques to linear-non-linear adjunctions organizes differentiation and formula symmetries.

Abstract
Differential Linear Logic (DiLL) is a sequent calculus that expresses differentiation via symmetries between linear and non-linear formulas. In this paper, we express categorical models of DiLL as a pair of Grothendieck fibrations equipped with a tangent functor. To do so, we adapt methods from categorical semantics of type theory to linear-non-linear adjunctions. This is a first step towards unifying DiLL and dependent types.
cs.LO 2026-05-11 2 theorems

New temporal logic characterizes encoder-decoder transformers

Cross-Attention and Encoder-Decoder Transformers: A Logical Characterization

A counting modality over the encoder and past modality over the decoder exactly capture cross-attention behavior even with masking changes.

[Figure from the paper]
Abstract
We give a novel logical characterization of encoder-decoder transformers, the foundational architecture for LLMs that also sees use in various settings that benefit from cross-attention. We study such transformers over text in the practical setting of floating-point numbers and soft-attention, characterizing them with a new temporal logic. This logic extends propositional logic with a counting global modality over the encoder input and a past modality over the decoder input. We also give an additional characterization of such transformers via a type of distributed automata, and show that our results are not limited to the specific choices in the architecture and can account for changes in, e.g., masking. Finally, we discuss encoder-decoder transformers in the autoregressive setting.
cs.LO 2026-05-11 Recognition

Finitary prebisimulations defined for truly concurrent processes

Finitary Truly Concurrent Bisimulations

The definitions restrict distinctions to finite observations so denotational models become fully abstract under prebisimulation preorder.

Abstract
Developing a fully abstract denotational model of a process language based on the prebisimulation preorder runs into two problems with its behavioural semantics: (1) two processes related by a standard denotational interpretation afford the same finite observations; (2) prebisimulation can distinguish the behaviours of two processes based on infinite observations. The finitary part of prebisimulation is therefore needed to obtain full-abstraction results. Two main results on finitary bisimulation exist: the logical form and the behavioural form. Following the latter, we give definitions of truly concurrent prebisimulations and their finitary counterparts.
cs.LO 2026-05-11 2 theorems

Evidence tracking turns tape randomness into witnessed probabilistic logic

Evidence-Tracked Tape Semantics for Probabilistic Computation

Entailments between tape predicates are realized by uniform transformers and yield sound probability inequalities under a fixed measure.

Abstract
A standard intensional account of probabilistic computation represents a randomized program as a deterministic computation that consumes an explicit random tape. This yields a two-layer perspective: an intensional layer that makes reuse of randomness and correlation visible, and an extensional layer obtained by interpreting tapes under a chosen probability measure. We develop an evidence-tracked tape semantics using the monadic-core-to-evidenced-frame pipeline (and its induced realizability tripos), obtaining a higher-order logic in which entailments are witnessed by uniform evidence transformers. Quantitative statements are recovered by interpretation: once a tape measure is fixed, probabilities and expectations arise by extracting numerical summaries from tape-indexed predicates, and entailments yield sound inequalities, with an almost-sure quotient supporting probability-one reasoning. We also study intensional principles that are lost at the level of laws, including proof-relevant transport along realizable tape-rewiring maps and a canonical splitting discipline for stream tapes enforcing independent draws. Finally, we relate tape-based reasoning to an extensional law semantics via pushforward, isolating a probability-one must abstraction as a sound summary of tape-based proofs.
cs.LO 2026-05-11 Recognition

LLMs cannot tell merge-ready Mathlib PRs from build-passing revisions

MathlibPR: Pull Request Merge-Readiness Benchmark for Formal Mathematical Libraries

New benchmark from real PR histories shows models and agents fail to predict which contributions actually get merged.

[Figure from the paper]
Abstract
The ecosystem of Lean and Mathlib has become the de facto standard for large language model (LLM) assisted formal reasoning with remarkable successes in recent years. Those successes, however, only consume Mathlib as an essential dependency but do not directly contribute to it. In the meantime, the growth of Mathlib has recently been bottlenecked by the review process, which requires human reviewers to judge whether proposed pull requests (PRs) follow Mathlib's conventions and are worth integrating as part of a shared mathematical infrastructure. This leads to our central question: can LLMs help review Mathlib PRs? To this end, we introduce MathlibPR, a benchmark built from real Mathlib4 PR histories. We further propose a staged evaluation protocol and use it to evaluate both LLM models (e.g., DeepSeek, Qwen, Goedel, and Kimina) and LLM agents (e.g., Codex and Claude Code). Surprisingly, both LLM models and LLM agents struggle to distinguish merge-ready PRs from build-passing PRs that were revised or never merged. By turning Mathlib PR histories into a supervised signal, MathlibPR provides a step toward reviewer assistants and reward models that could help evaluate PRs and steer LLMs toward producing merge-ready Mathlib contributions.
cs.LO 2026-05-08 Recognition

QUBO encoding finds short SAT implicants by leaving variables unassigned

Computing Short SAT Implicants via Ising/QUBO Encodings

Dual-polarity representation lets ground states represent don't cares and produce minimal partial solutions on 3-SAT benchmarks.

Abstract
Many reasoning tasks require short partial satisfying assignments (implicants), sometimes focusing on a set of important variables. SAT-to-Ising-QUBO formulations are implicitly designed so that ground states correspond to total assignments, since the Ising/QUBO model assigns a value to every spin and has no native representation of unassigned variables. We introduce an Ising/QUBO framework that incorporates "don't-care" semantics into the quadratic model via a dual-polarity representation, enabling the retrieval of short implicants. The encoding supports implicant shrinking and projection through minor objective modifications. We provide parameter regimes under which ground states correspond to short partial satisfying assignments, achieving minimality and, when the quadratic penalty function permits, minimum-cardinality. We empirically evaluate the encoding with simulated annealing on random 3-SAT enumeration benchmarks and non-CNF formulas, showing that it leaves about one-third of variables unassigned on random 3-SAT formulas while preserving satisfiability, and that consecutive polarity-freezing rounds achieve minimality (and minimum-cardinality) with high probability.
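One plausible reading of the dual-polarity idea, on the smallest possible example (our interpretation, not the paper's exact encoding): a variable x gets two spins (x_pos, x_neg) with (1,0) read as true, (0,1) as false, and (0,0) as "don't care"; a quadratic penalty forbids (1,1), and a reward drives assignment only where the formula needs it.

```python
from itertools import product

def energy(bits, Q):
    # QUBO energy  x^T Q x  for a binary vector, with Q given as a
    # sparse dict of (i, j) -> coefficient.
    return sum(Q.get((i, j), 0) * bits[i] * bits[j]
               for i, j in product(range(len(bits)), repeat=2))

# Spins (x_pos, x_neg) for one variable x: (1,0)=true, (0,1)=false,
# (0,0)=don't care. A penalty rules out the contradictory (1,1), and a
# small reward on x_pos encodes satisfying the unit clause (x).
Q = {(0, 1): 4.0,    # penalty: x_pos and x_neg both set
     (0, 0): -1.0}   # reward for assigning x true
states = sorted(product((0, 1), repeat=2), key=lambda b: energy(b, Q))
print(states[0])  # → (1, 0): the ground state assigns x true, never (1, 1)
```

With more variables, spins not needed for satisfaction can stay at (0,0), which is how ground states come to represent short partial assignments.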
cs.LO 2026-05-08

Relational dualities link bisimulations to predicate relations

Relational Dualities and Bisimulation

Extensions of Tarski and Thomason dualities create a correspondence that yields a proof system relating formulae across systems.

abstract click to expand
The Kripke semantics of various logics arises via categorical dualities between a category of relational frames and their maps, and a category of algebras and logical homomorphisms. When the relational frames are considered as computational systems (e.g. the states of a machine), the corresponding algebra is one of logical predicates on these systems (e.g. predicates on these states, i.e. program logics). Our aim is to extend this phenomenon to relations, putting well-behaved relations between systems (e.g. bisimulations) in correspondence with relations between predicates. This is achieved by constructing particular relational extensions of Tarski duality (for infinitary classical propositional logic) and Thomason duality (for infinitary classical modal logic). We sketch how these dualities give rise to a proof system that relates formulae between different systems.
0
0
cs.LO 2026-05-08

Graded coalgebras model continuous time transitions

Graded Monad Coalgebras for Continuous-Time Transition Systems

New structures extend discrete coalgebra theory to capture systems that evolve continuously, with semantics and modal logics.

abstract click to expand
Functor coalgebras capture a wide range of transition systems, which, however, must evolve in discrete steps. We introduce graded coalgebras of graded monads and propose them to model continuous-time transition systems. We develop the theory of graded coalgebras-including graded distributive laws between graded monads-and we give conditions for the existence of terminal coalgebras. We define both branching-time and trace semantics, linking them to recent work on Feller-Dynkin processes. Finally, we develop coalgebraic modal logics for both process semantics and state criteria for invariance and expressivity.
0
0
cs.LO 2026-05-08

Symbolic models decide satisfiability in extended first-order fragments

Decidability Results for Fragments of First-Order Logic via a Symbolic Model Property

A generic construction shows that allowing restricted self-loops on one sort preserves the finite-model property needed for decidability.

abstract click to expand
Recently, symbolic structures were proposed as finite representations of potentially infinite first-order structures, where Linear Integer Arithmetic terms and formulas define the domain and interpretations of a structure. We generalize symbolic structures to use any base theory that admits a standard model. Symbolic structures induce a symbolic model property, which holds for a fragment of first-order logic if every satisfiable formula in the fragment has a symbolic model. The symbolic model property implies decidability, since the model-checking problem for symbolic structures is decidable. We use the symbolic model property to prove decidability for several fragments that extend the fragment of stratified formulas, relaxing the quantifier-alternation constraints by allowing one sort to have self-looping functions, under certain restrictions. To establish the symbolic model property for these fragments we construct a symbolic model for a formula from an arbitrary model. The construction and its correctness are proved in a generic fashion, which may be instantiated to other similarly restricted fragments.
0
0
cs.LO 2026-05-08

Gossip protocols fix their own faulty messages

Self-Correcting Gossip Protocols

Dynamic epistemic logic models autonomous correction of transmission errors without central oversight and shows how the corrections affect protocol optimality.

Figure from the paper full image
abstract click to expand
We investigate self-correcting gossip protocols with errors. In distributed computing, protocols with errors have been widely investigated in temporal epistemic logics. Instead, we propose a dynamic epistemic logic. We show how to correct transmission errors due to faulty messages without a central authority coordinating protocol execution, how this affects optimality, and how this compares to bounded memory and full information protocols.
0
0
cs.LO 2026-05-08

New language allows automatic verification of larger quantum programs

A Practical Specification Language for Automatic Quantum Program Verification (Technical Report)

Linear-in-qubits translation to compact automata supports Hoare-style checks at new scales.

Figure from the paper full image
abstract click to expand
Hoare-style verification provides a principled foundation for reasoning about the correctness of quantum programs, but existing approaches do not allow fully automatic verification. While automata-based verification scales well when specifications are given directly as automata, prior frameworks incur exponential blow-up when translating high-level set-based assertions into automata, which severely limits practicality. We introduce an extended set-based specification language and a specification-to-automata translation algorithm whose complexity is linear in the number of qubits, enabled by controlled automaton construction and qubit reordering. The resulting compact automata enable fully automatic Hoare-style verification of fixed-qubit quantum programs at previously infeasible scales, while substantially improving expressiveness without compromising efficiency.
0
0
cs.LO 2026-05-07

Fixpoint theory defines semantics for fuzzy programs with dual negation

Paraconsistent Semantics for Extended Fuzzy Logic Programs via Approximation Fixpoint Theory [Extended Version]

Approximating operators produce paraconsistent rules for programs that mix failure-based and strong negation while unifying earlier methods.

abstract click to expand
In logic programming, negation can be interpreted in various ways. Probably best known is the concept of "negation as failure", where "$\mathit{not}\, p$" is true if we have no evidence for $p$. On the other hand, strong negation requires that we have evidence for $p$ being false. Defining semantics for logic programs containing both kinds of negation is a challenging task, and this becomes even more challenging when combining this with other extensions of logic programming, e.g. fuzziness. In this work, we use the framework of approximation fixpoint theory to formulate well-behaved semantics for fuzzy logic programs containing both "by-failure" and strong negation. We show that this framework generalizes several existing semantics as well as giving rise to a host of new semantics.
0
0
cs.LO 2026-05-07

Goedel logics eliminate Delta under restricted propositional semantics

Goedel Logics: On the Elimination of The Absoluteness Operator

Atoms valued below 1 allow rewriting any formula with Delta as a disjunction of chain formulas, with the standard and restricted semantics coinciding on Delta-free valid formulas.

abstract click to expand
We investigate the eliminability of the absoluteness operator Delta in Goedel logics. While Delta is not definable from the standard connectives and disrupts important proof-theoretic properties, we show that it becomes eliminable at the propositional level under a restricted semantics in which all propositional atoms (except the truth constant 'True') are interpreted strictly below 1. Under this semantics, every formula containing Delta is equivalent to a disjunction of chain formulas, yielding a Delta-free normal form (standard and restricted semantics coincide w.r.t. valid formulas without Delta). We further analyze the situation in the first-order setting, where Delta-elimination fails in general due to recursion-theoretic and topological constraints, but can be recovered under witnessed semantics.
0
0
cs.LO 2026-05-07

Fixed spider-diagram rules derive all Greimas meta-terms uniformly

A diagrammatic proof-theoretic semantics for the Greimas semiotic square

Each meta-term is obtained from a basic pair by the same sequence of contour and habitat steps, giving a constructive account of the semiotic '+' operation.

abstract click to expand
We develop a diagrammatic proof system for a fragment of structural semantics inspired by the Greimas semiotic square, using spider diagrams as the underlying formalism. The basic terms are represented as diagrammatic configurations, and the relations of contradiction and implication are interpreted as transformations governed by a set of inference rules. These transformations are realised as derivations, with proof trees serving as witnesses. Our main result shows that the construction of the four meta-terms can be captured uniformly: each is derivable from a conjunctive pair of basic configurations via a fixed derivation schema composed of contour introduction and habitat transformation rules. This yields a proof-theoretic account of the combinatorial operation underlying meta-term formation, and provides a semantic interpretation of the Greimasian operation `+' as a derivational construction rather than a logical combination. We further show that diagrammatic negation in this setting is not a Boolean complement, but a restricted, zone-determined semantic counter-position, reflecting the relational character of opposition in structural semiotics. The resulting framework provides a compositional, rule-based semantics in which complex configurations are generated constructively from simpler ones. In addition to extending the expressive scope of spider diagram calculi, the system illustrates how diagrammatic reasoning can be used to formalise non-classical semantic operations within a unified inferential setting.
0
0
cs.LO 2026-05-07

Stack condition decides model checking for context-free hyperproperties

Logics for Context-free Hyperproperties

The logic allows decidable verification of single-alternation information flow in recursive systems when the stack height depends only on traces from the first quantifier block.

Figure from the paper full image
abstract click to expand
We introduce a novel logic for the specification of context-free hyperproperties, which capture, e.g., the flow of information in security-critical recursive systems. Intuitively, the logic extends visibly pushdown automata by quantification over traces, just like HyperLTL, the most important logic for regular hyperproperties, extends LTL by quantification over traces. Using a game-based approach, we show that model-checking is decidable for formulas with a single quantifier alternation, provided the stack height of the visibly pushdown automaton only depends on the traces bound to the variables of the first quantifier block. A single quantifier alternation suffices to express many information-flow properties studied in the literature. Complementarily, we show that model-checking is undecidable for formulas with a single quantifier alternation, if the stack behavior of the visibly pushdown automaton may depend on the second quantifier block. This also implies that model-checking is undecidable for almost all fragments with more than one quantifier alternation.
0
0
cs.LO 2026-05-07

Coalition abilities split into four exhaustive categories including full inability

Beyond Ability: The Four-Fold Spectrum of Power and the Logic of Full Inability

Full inability captures the case where a group forces neither a proposition nor its opposite, and the resulting logic stays PSPACE-complete.

Figure from the paper full image
abstract click to expand
Coalition Logic studies what coalitions can enforce. Recent work treats inability as simple non-ability: $\neg\Eff{C}\varphi$. This conflates two distinct configurations -- a coalition unable to force $\varphi$ may still force $\neg\varphi$, retaining adversarial control rather than genuine inability. We introduce \textbf{Full Inability} ($\FI$): the symmetric condition in which a coalition can enforce neither a proposition nor its negation. Combining coalitional effectivity with propositional negation yields a four-fold spectrum: \textbf{Full Control} ($\FC$), \textbf{Positive Determination} ($\PD$), \textbf{Adverse Determination} ($\AD$), and \textbf{Full Inability} ($\FI$). These categories partition a coalition's strategic status exhaustively and exclusively. We establish their algebraic and order-theoretic structure. Under $\alpha$-duality, propositional negation and coalition complementation generate a Klein four-group symmetry. In playable models, the four power regions are order-convex in the powerset lattice, yielding interval-stable verification of inability. We axiomatize $\CLFI$, a definitional extension treating Full Inability as a primitive modality. Via elimination translation, we prove soundness, completeness, and conservativity over Coalition Logic. The extension preserves expressive power and complexity ($\PSPACE$-complete), but provides direct proof-theoretic access to symmetric inability, strategic dependence, propositional dummyhood, and containment verification.
0
0
cs.LO 2026-05-06

Probabilistic floating-point error bounds computed far faster

Probabilistic Floating-Point Round-Off Analysis via Concentration Inequalities

Over-approximated Taylor expansions plus concentration inequalities produce tight thresholds with far less computation than prior techniques.

abstract click to expand
Floating-point round-off errors are ubiquitous in numerically intensive programs arising in fields such as scientific computing and optimization. As floating-point errors potentially lead to unexpected and catastrophic program failures, one must derive guaranteed round-off thresholds to ensure the correctness of these programs. However, deterministic round-off thresholds tend to be too conservative to be usable in practice, since they often involve large round-off errors that occur with small probability. Probabilistic thresholds relax deterministic ones by specifying that the probability of the round-off error exceeding a threshold is below a given confidence. In this work, we propose a novel approach to probabilistic round-off analysis, by applying concentration inequalities over the Taylor expansion from FPTaylor (TOPLAS 2018). A major obstacle in applying concentration inequalities is that the Taylor expansion involves absolute value operators that make the calculation of the expected values of the first order partial differential terms difficult. Our first step to overcome this obstacle is a sound over-approximation that removes the absolute value operators in polynomial expressions. Then, we show how to handle fractional expressions by a transformation into polynomial case. Finally, we show how to improve our approach with range partitioning. Our approach is scalable since the key computational part is the calculation of expected values of polynomial expressions with independent variables, for which the linear and independence properties of expectation boost the computation. Experimental results show that our approach is orders of magnitude more time efficient, while producing thresholds with comparable precision against the state of the art.
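To illustrate why probabilistic thresholds can beat deterministic ones, here is a toy computation (not the paper's FPTaylor-based method) that applies Hoeffding's inequality to a sum of independent round-off terms, each assumed bounded in [-u, u] with mean zero. All parameter values are hypothetical.

```python
import math

# Toy illustration: model each of n first-order round-off terms as an
# independent random variable bounded in [-u, u] with mean 0.
# Hoeffding's inequality gives
#   P(|S| >= t) <= 2 * exp(-t^2 / (2 * n * u^2)),
# so the probabilistic threshold at confidence delta is
#   t = u * sqrt(2 * n * ln(2 / delta)),
# versus the deterministic worst case n * u.
def thresholds(n, u, delta):
    deterministic = n * u
    probabilistic = u * math.sqrt(2 * n * math.log(2 / delta))
    return deterministic, probabilistic

u = 2.0 ** -53          # unit round-off for IEEE double precision
det, prob = thresholds(n=10_000, u=u, delta=1e-9)
print(det, prob)        # probabilistic bound grows like sqrt(n), not n
```

The sqrt(n) versus n gap is the basic reason deterministic thresholds become overly conservative for long computations, which motivates the probabilistic analysis.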
0
0
cs.LO 2026-05-06

First self-certifying model checker for CTL properties

iSMC: A BDD-based Symbolic Model Checker with Interactive Certification

iSMC solves arbitrary Computation Tree Logic instances with justice and certifies answers interactively at user-selected high probability.

Figure from the paper full image
abstract click to expand
We present iSMC, the first self-certifying model checker with interactive certification, a certification paradigm based on the theory of interactive proof systems. iSMC is a symbolic BDD-based model checker for arbitrary properties of Computation Tree Logic (CTL) with justice requirements. After solving an instance of the model-checking problem, iSMC conducts a certification procedure that guarantees with high probability (chosen by the user) that the answer is correct. iSMC is based on the technology of the QBF-solver with interactive certification presented by Couillard et al. at CAV 2023. We extend, improve on, and re-implement this technology, adapting it to the needs of CTL model checking.
0
0
cs.LO 2026-05-06

Incremental SAT solves 1.5 times more hard bounds with backtrackable inprocessing

Backtrackable Inprocessing

The framework removes the global-level restriction on inprocessing while keeping backtracking sound, improving results on Bounded Model Checking benchmarks.

Figure from the paper full image
abstract click to expand
We introduce Backtrackable Inprocessing (BI), a framework that enables applying inprocessing under the current trail at any decision level, at any point during incremental SAT solving. Our approach lifts the long-standing restriction that inprocessing must be performed only at the global decision level, thereby substantially increasing its potential effectiveness. We focus on three highly efficient core techniques: subsumption, self-subsuming resolution, and Bounded Variable Elimination (BVE). We show how to ensure sound backtracking in the presence of inprocessing, and demonstrate that applying BI for incremental preprocessing after propagating assumptions yields significant performance improvements on Bounded Model Checking (BMC) benchmarks from the Hardware Model Checking Competition 2017. Implemented in the Island SAT solver (IntelSAT's fork), BI enables solving $\sim$1.5$\times$ as many difficult bounds as the baseline global-level incremental preprocessor.
0
0
cs.LO 2026-05-06

Induction restriction yields compact sequent system for Transition Algebra

Induction rules for Transition Algebra

The change produces completeness under new semantics and a model-theoretic Craig interpolation proof for rewriting logics.

abstract click to expand
Transition Algebra (TA) is an infinitary logic introduced for reasoning about rewriting systems. The natural-deduction proof systems previously introduced for TA are complete for countable signatures; however, they lack compactness, making them unsuitable for practical applications. Here we obtain a compact proof system by restricting the (Star) rule to induction, and we use a sequent calculus instead of natural deduction. Furthermore, we introduce a semantics for which this system is complete, together with an application: a model-theoretic proof of Craig interpolation.
0
0
cs.LO 2026-05-06

Minimalist logic programs split into Krom pieces with exact model recovery

Set-like operations on propositional logic programs

Set-like operations let the least model of a full program be rebuilt from the least models of its simpler Krom components.

abstract click to expand
A systematic algebraic framework for composing and decomposing logic programs is currently missing, limiting our ability to analyze and construct programs in a modular way. In this paper, we introduce set-like operations for (propositional Horn) logic programs that allow for a structured manipulation of rule bodies. Our main technical result shows that programs can be decomposed into simpler components in such a way that their least model semantics can be reconstructed or approximated from the semantics of these components. In particular, we prove that every minimalist program can be decomposed into Krom programs -- consisting only of rules with at most one body atom -- such that its least model can be computed from the least models of its components. For arbitrary programs, we obtain corresponding approximation results. These results provide a new algebraic perspective on logic programs and lay the groundwork for compositional reasoning and program construction.
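The least-model semantics that the decomposition result preserves can be computed by the standard fixpoint iteration of the immediate-consequence operator. The sketch below shows this baseline computation (not the paper's decomposition); the example program is hypothetical.

```python
# Least model of a propositional Horn program via fixpoint iteration of
# the immediate-consequence operator T_P.  A rule is (head, body_atoms);
# Krom programs are the special case where len(body) <= 1.
def least_model(program):
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# p <- . ; q <- p. ; r <- p, q. ; s <- t.  (t is never derivable)
prog = [("p", ()), ("q", ("p",)), ("r", ("p", "q")), ("s", ("t",))]
print(least_model(prog))  # {'p', 'q', 'r'}
```

The rule `r <- p, q` has two body atoms, so it is not Krom; the paper's result says such programs can be decomposed into Krom pieces whose least models suffice to rebuild this one.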
0
0
cs.LO 2026-05-06

D-institutions model variables directly via generalized functor categories

A formulation of D-institution using functor categories

Signature extensions are replaced by a functor that adds compound sentences, yielding a complete proof system for predicate logics.

abstract click to expand
Variables are a crucial element in logic and are also addressed in institution theory, an effort to axiomatize logic. In institution theory, one typically uses extensions (signature morphisms) obtained from variables instead of introducing variables directly. While this approach appears simple at first glance because it introduces no new structures, it often requires numerous conditions to describe variable structures, which can actually complicate the discussion. In this paper, we propose introducing variable structures directly, utilizing a generalization of functor categories. We define a category of predicate logics, formulate the introduction of compound sentences as a functor, and introduce a proof system for which we prove a completeness theorem.
0
0
cs.LO 2026-05-06

Direct algorithm cuts C² model counting to linear in counting parameters

A Fast Model Counting Algorithm for Two-Variable Logic with Counting and Modulo Counting Quantifiers

Retaining counting quantifiers in Scott normal form also proves tractability for the modulo extension and yields large practical speedups.

Figure from the paper full image
abstract click to expand
Weighted first-order model counting (WFOMC) is a central task in lifted probabilistic inference: It asks for the weighted sum of all models of a first-order sentence over a finite domain. A long line of work has identified domain-liftable fragments of first-order logic, that is, syntactic classes for which WFOMC can be solved in time polynomial in the domain size. Among them, the two-variable fragment with counting quantifiers, $\mathbf{C}^2$, is one of the most expressive known liftable fragments. Existing algorithms for $\mathbf{C}^2$, however, establish tractability through multi-stage reductions that eliminate counting quantifiers via cardinality constraints, which introduces substantial practical overhead as the domain size grows. In this paper, we introduce IncrementalWFOMC3, a lifted algorithm for WFOMC on $\mathbf{C}^2$ and its modulo counting extension, $\mathbf{C}^2_{\text{mod}}$. Instead of relying on reduction techniques, IncrementalWFOMC3 operates directly on a Scott normal form that retains counting quantifiers throughout inference. This direct treatment yields two main results. First, we derive a tighter data-complexity bound for WFOMC in $\mathbf{C}^2$, reducing the degree of the polynomial from quadratic to linear in the counting parameters. Second, we prove that $\mathbf{C}^2_{\text{mod}}$ is domain-liftable, extending tractability from $\mathbf{C}^2$ to a richer fragment with native modulo counting support. Finally, our empirical evaluation shows that IncrementalWFOMC3 delivers orders-of-magnitude runtime improvements and better scalability than both existing WFOMC algorithms and state-of-the-art propositional model counters.
0
0
cs.LO 2026-05-05

Algebra derives fixed point theorems by equations only

The Algebra of Iterative Constructions

The algebra of iterative constructions proves Kleene, Tarski-Kantorovich and a new limit-based theorem without explicit computations with iteration indices.

Figure from the paper full image
abstract click to expand
Fixed points are a recurring theme in computer science and are often constructed as limits of suitably seeded fixed point iterations. We present the algebra of iterative constructions (AIC) -- a purely algebraic approach to reasoning about fixed point iterations of continuous endomaps on complete lattices. AIC allows derivations of constructive fixed point theorems via equational logic and avoids explicit computations with indices. For example, $$F \,\Diamond\, F^{*} \bot = \Diamond\, F^{*} \bot$$ states in AIC that $\sup_n F^n (\bot)$ -- a construction known from the Kleene fixed point theorem -- is a fixed point of $F$. We demonstrate the applicability of AIC by providing algebraic proofs of several well- and less-well-known fixed point theorems: Among others, we prove the Tarski-Kantorovich principle -- a generalization of the Kleene fixed point theorem -- as well as a fixed point-theoretic generalization of $k$-induction -- a technique used in software verification. We moreover present a novel fixed point theorem. Under suitable continuity conditions, it obtains fixed points as lattice-theoretic limit inferiors and limit superiors of iterating an endomap on an arbitrary seed element. We have mechanized our algebra in Isabelle/HOL. Isabelle's sledgehammer tool is able to find proofs of the above fixed point theorems fully automatically. Finally, we investigate the completeness of our axiomatization of AIC. We prove that our finite set of finitary axioms is (a) sound but incomplete for standard models of AIC (sequences of elements from a complete lattice) and that (b) a different finite set of infinitary axioms is complete. We also prove that infinitary axioms are unavoidable: there exists no complete axiomatization of standard models given by finitely many finitary axioms.
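The construction $\sup_n F^n(\bot)$ that the abstract's AIC equation describes algebraically can be made concrete on a finite lattice, where the supremum is reached after finitely many steps. The sketch below is that concrete counterpart (not the AIC formalism itself); the monotone map is a hypothetical example.

```python
# The Kleene construction sup_n F^n(bot), concretely: iterate a
# monotone map on the powerset lattice (bot = empty set) until the
# sequence stabilises; the limit is the least fixed point of F.
def kleene_lfp(f, bot):
    x = bot
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Monotone F on subsets of {0..4}: close under successor from seed {0}.
def step(s):
    return s | {0} | {i + 1 for i in s if i + 1 <= 4}

print(kleene_lfp(step, frozenset()))  # the full set {0, 1, 2, 3, 4}
```

On complete lattices with merely continuous maps the iteration may need transfinitely many steps in general; Kleene's theorem guarantees convergence at omega for Scott-continuous maps, which is the setting AIC reasons about equationally.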
0
0
cs.LO 2026-05-05

Rational-weight ReLU networks equal fuzzy logic formulas

Neural networks as fuzzy logic formulas

ReLU networks with rational weights translate exactly into formulas of Rational Pavelka Logic and LΠ1/2 fragments, supporting arbitrary real activation values.

Figure from the paper full image
abstract click to expand
Neural networks are a fundamental aspect of modern artificial intelligence, playing a key role in various important machine learning architectures including transformers and graph neural networks. Recently, logical characterisations have been used to study the expressive power of many machine learning architectures, but logical characterisations of plain neural networks have received less attention. In this paper, we provide fuzzy logic characterisations of rational-weight ReLU-activated neural networks via two well-established fuzzy logics: Rational Pavelka Logic RPL (and extensions thereof) and (fragments of) $\mathit{L \Pi} \frac{1}{2}$. The activation values of the neural networks are allowed to be arbitrary real numbers. We also provide fuzzy logic characterisations of a generalised polynomial ring over $\mathbb{Q}$ in countably many variables where the use of the ReLU-function is permitted.
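One direction of the correspondence is easy to see concretely: the Łukasiewicz connectives underlying Rational Pavelka Logic are exactly expressible with affine maps and ReLU, i.e. by a small ReLU network. The sketch below illustrates this known connection, not the paper's full construction; the function names are hypothetical.

```python
# Lukasiewicz connectives on [0,1] via affine maps and ReLU only.
def relu(z):
    return max(0.0, z)

def luk_or(x, y):   # strong disjunction min(1, x + y)
    return (x + y) - relu(x + y - 1.0)

def luk_and(x, y):  # strong conjunction max(0, x + y - 1)
    return relu(x + y - 1.0)

def luk_not(x):     # negation 1 - x (purely affine)
    return 1.0 - x

print(luk_or(0.7, 0.5), luk_and(0.7, 0.5), luk_not(0.7))
```

Composing these one-layer gadgets shows that any Łukasiewicz formula induces a rational-weight ReLU network; the paper's characterisation also covers the converse direction and the extension with product connectives.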
0
0
cs.LO 2026-05-05

SHACL implication decidable in exponential time for well-founded models

Static Analysis of Recursive SHACL

A translation to hybrid mu-calculus yields the procedure, while supported and stable model semantics make the problem undecidable even for ALCIO shape expressions.

Figure from the paper full image
abstract click to expand
SHACL (Shapes Constraint Language) expresses constraints on RDF data by means of so-called shapes. Its central service is validation: verifying whether a data graph complies with a SHACL document. But so far, there are no static analysis services to compare documents. In this paper, we study the following problem: decide whether all graphs that validate one SHACL document also validate another. Unlike previous works that have considered the implication of shape expressions only, we consider documents comprising (recursive) shape definitions and targets. We show that implication (a.k.a. containment) is undecidable under the supported and the stable model semantics, even for the fragment that uses the description logic ALCIO for shape expressions. Under the well-founded semantics, in surprising contrast, it is decidable in single exponential time. Our key technical contribution is a translation of SHACL under the well-founded semantics into the full hybrid mu-calculus, revealing a novel link between well-founded models and a fixed point modal logic, and a worst-case optimal automata-based decision procedure.
0
0
cs.LO 2026-05-05 3 theorems

Formal proof gives SIR epidemic model global forward flow and invariance

Certified Qualitative Analysis of the SIR ODE and Reusable Scalar Lemmas in Isabelle/HOL

Isabelle/HOL reuses scalar lemmas to lift local AFP flows to unique solutions on any [0,b], proving conservation and threshold monotonicity.

Figure from the paper full image
abstract click to expand
We present a mechanically checked Isabelle/HOL bridge from the Picard-Lindelof flow infrastructure in the Archive of Formal Proofs (AFP) to selected qualitative facts for the mass-action, closed-population SIR epidemic ODE. The epidemiological facts are classical; the contribution is reusable theorem infrastructure connecting the AFP local-flow construction to global forward existence, uniqueness, forward invariance of the nonnegative orthant, conservation, monotonicity, the Kermack-McKendrick conserved phase-plane relation, compartment bounds, and threshold-ratio conditions for infectious growth and monotonicity. The proof first establishes sign and conservation facts for local AFP flow segments, then uses the conserved nonnegative simplex as the compactness witness for extending the flow to all forward times. The finite-interval qualitative facts are then transferred to the unique AFP forward flow on arbitrary intervals [0,b] with b>0, so the results apply to the constructed Isabelle/AFP SIR solution rather than to an assumed trajectory. The reusable layer provides homogeneous-linear scalar compartment lemmas for equations X'(t)=f(t)X(t), derivative-sign monotonicity, three-compartment conservation, and an SIR transfer bridge to the AFP flow infrastructure. We do not formalize stability, final-size, or asymptotic theory. The accompanying Isabelle artifact builds with Isabelle 2024 and AFP 2024 and contains no sorry or oops proof placeholders.
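The conservation fact at the heart of the formalisation (the derivatives of S, I, R sum to zero, so N = S + I + R is invariant) can be observed numerically. The sketch below is an informal Python companion to the Isabelle development, not part of it; the parameter values are hypothetical.

```python
# Closed-population mass-action SIR: S' = -b*S*I, I' = b*S*I - g*I,
# R' = g*I.  The derivatives sum to zero, so N = S + I + R is conserved
# along any trajectory; forward Euler preserves it up to round-off.
def sir_step(s, i, r, b, g, dt):
    ds = -b * s * i
    di = b * s * i - g * i
    dr = g * i
    return s + ds * dt, i + di * dt, r + dr * dt

b, g, dt = 0.5, 0.1, 0.01
s, i, r = 0.99, 0.01, 0.0
n0 = s + i + r
for _ in range(10_000):
    s, i, r = sir_step(s, i, r, b, g, dt)

print(s + i + r)  # stays at n0 up to floating-point round-off
```

The threshold condition formalised in the paper is also visible here: I initially grows because S = 0.99 exceeds g/b = 0.2, and the epidemic moves mass from S into R while N stays fixed.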
0
0
cs.LO 2026-05-05 2 theorems

Ecumenical systems preserve Glivenko's logic translations

Glivenko's theorems from an ecumenical perspective

Three natural deduction setups show the classical-intuitionistic links survive when both reasoning styles are used together.

Figure from the paper full image
abstract click to expand
In this paper, we revisit Glivenko's theorems, foundational results relating classical and intuitionistic logic, from an ecumenical perspective. We begin by discussing the historical context and significance of Glivenko's original contributions, and then examine their extensions and reinterpretations within ecumenical logical frameworks. Our analysis focuses on three ecumenical systems: Prawitz's natural deduction system NE; the system NEK, closely related to one introduced by Krauss in an unpublished manuscript; and the ECI system proposed by Barroso-Nascimento.
0
0
cs.LO 2026-05-05 2 theorems

One relation characterises must-preorder in all sync and async settings

A uniform characterisation of the (a)synchronous must-preorder

Label abstractions unify soundness and completeness proofs for value-passing and pure calculi.

Figure from the paper full image
abstract click to expand
In the setting of message passing software, De Nicola and Hennessy's must-preorder defines when a program improves on another one. Since this preorder does not come equipped with a viable proof method, using it requires an alternative relation that characterises it. The literature presents at least four different definitions of such alternative preorders, depending on whether communication is synchronous or asynchronous and on whether there is value-passing or not. The existence of these different definitions complicates the overall theory, hinders the development of tools, and, on the whole, suggests a lack of understanding of the properties necessary and sufficient to reason on the must-preorder. This paper presents the first alternative characterisation that works at least in all the four settings mentioned above. We achieve this result thanks to an axiomatic approach that is calculus independent, by highlighting the role of blocking and non-blocking actions, and by introducing the novel notion of label abstraction. Label abstractions capture the essence of safe substitutivity w.r.t. interactions, and they let us obtain a unique proof of soundness and a unique proof of completeness that work in all the mentioned settings. We believe this generalises and simplifies the overall theory, while letting us present the existing results in a uniform way. Our proofs are constructive and our result is entirely mechanised in Rocq.
0
0
cs.LO 2026-05-05 3 theorems

Lean 4 counter-model blocks Spinoza Prop V even with substantive PSR

Bennett's Conjecture in Lean 4: Counter-Models for the PSR-Reducibility of Spinoza's Propositions V and XIV

A four-element structure obeys the axioms plus Della Rocca's PSR yet contains distinct substances that share an attribute, confirming the underivability.

abstract click to expand
In A Study of Spinoza's Ethics (1984, §17), Jonathan Bennett argues that the demonstration of Proposition V of Spinoza's Ethica contains identifiable invalid moves and that, even granting those moves, it "cannot yield more than the conclusion that two substances could not have all their attributes in common" -- while Spinoza concludes that they cannot share any. Bennett doubts that any valid reconstruction is available from Spinoza's stated resources without importing further commitments. Michael Della Rocca (Spinoza, 2008, ch. 2) responds that the proposition can be derived if the Principle of Sufficient Reason (PSR) is adopted as a substantive commitment. The debate has remained at the level of prose argument for forty years. This paper provides the first machine-checked evidence in the debate. We formalise Ethica Pars I in Lean 4, encoding Bennett's reading of Spinoza's stated axioms as a typeclass and Della Rocca's substantive PSR as an extension class. The derivation attempt yields a partial result -- substances sharing all attributes are identical -- but cannot reach the full "sharing-any-attribute -> identity" content of Proposition V, mechanically tracking Bennett's own all-attributes ceiling. A four-element counter-model satisfying both axiom sets while falsifying Proposition V's content establishes the irreducibility against this specific augmentation. A second counter-model establishes the analogous result for axiom A15, a load-bearing universality clause for Spinoza's Proposition XIV. Bennett's diagnosis receives its first kernel-level confirmation against the Della Rocca PSR-substance reconstruction; stronger PSR variants remain open as future mechanical projects.
0
0
cs.LO 2026-05-04 2 theorems

Compiled forms enable direct projection for quantifiers in alternating automata

Knowledge Compilation for Quantification in Alternating Automata

Normal forms let both existential and universal projections run uniformly on transitions, removing the need for complementation when eliminating quantified variables.

Figure from the paper full image
abstract click to expand
We present a knowledge compilation approach for existential and universal quantification in alternating automata. Knowledge compilation transforms formulas into normal forms with special properties that enable efficient answering of questions of interest. For Boolean formulas, several normal forms that have proven effective for existential/universal quantification, and even for functional synthesis, have been studied in the literature. For infinite word automata, quantification is a fundamental operation in verification tasks such as QPTL satisfiability checking and HyperLTL model checking. Existing algorithms rely on nondeterministic infinite word automata, where existential projection can be efficiently performed state-wise, but universal projection requires complementation. Complementing nondeterministic infinite word automata, however, is expensive in practice, making existing algorithms infeasible for automata in practice. Towards addressing this problem, we propose novel knowledge compilation techniques for existential and universal quantification on alternating safety automata. Our approach compiles alternating automata into normal forms where projection can be applied uniformly and efficiently to each state's transition function. Using the compilations for each type of quantification, we can effectively eliminate a sequence of alternating quantifiers in formulas without complementation. Our BDD-based prototype demonstrates the practical effectiveness of our algorithms on a suite of QPTL satisfiability benchmarks.
0
0
cs.LO 2026-05-04

SMT encoding turns RNmatrix semantics into practical provers

Efficient Decision Procedures for RNmatrix Semantics

Translating restricted non-deterministic matrices into satisfiability constraints yields faster validity checks and countermodels for paraconsistent, intuitionistic, and modal logics.

Figure from the paper full image
abstract click to expand
Restricted non-deterministic matrices (RNmatrices) impose constraints on the rows of non-deterministic matrices (Nmatrices), filtering out "unsound" rows and retaining only "valid" ones. This yields a more expressive framework than standard Nmatrices. Although this approach enables sound and complete semantics for a broad class of logics, e.g., paraconsistent logics, propositional intuitionistic logic, and the fifteen normal modal logics of the modal cube, no efficient decision procedures based on these semantics have been proposed. In this paper, we implement the RNmatrix framework to develop a new suite of automated theorem provers for these logics. By encoding RNmatrices and their elimination criteria as Satisfiability Modulo Theories (SMT) problems, we leverage SMT solvers to decide formula validity and construct countermodels. We illustrate the method for paraconsistent logics, where our prover outperforms the current state-of-the-art and provides the first implementation for the entire $C_n$ hierarchy, as well as for intuitionistic and modal logics, where our general-purpose prover achieves competitive performance.
0
0
cs.LO 2026-05-04
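The row-filtering idea is easy to see in miniature. The sketch below is ours, not the paper's SMT encoding: it brute-forces validity over a toy three-valued Nmatrix with a non-deterministic negation, and shows how a restriction on valuations (an "elimination criterion") changes which formulas are valid.

```python
# Toy three-valued Nmatrix: values 0 (false), 1 ("both"), 2 (true);
# designated values are {1, 2}. Illustrative only, not one of the
# paper's actual RNmatrices.
DESIGNATED = {1, 2}
NEG = {0: {2}, 1: {0, 1}, 2: {0}}     # negation is non-deterministic at 1
AND = {(a, b): {min(a, b)} for a in range(3) for b in range(3)}

def subformulas(f):
    """Distinct subformulas in post-order (children before parents)."""
    seen, out = set(), []
    def walk(g):
        if g in seen:
            return
        if isinstance(g, tuple):
            for child in g[1:]:
                walk(child)
        seen.add(g)
        out.append(g)
    walk(f)
    return out

def valuations(f):
    """All assignments of values to subformulas legal for the Nmatrix."""
    subs = subformulas(f)
    def extend(i, val):
        if i == len(subs):
            yield dict(val)
            return
        g = subs[i]
        if isinstance(g, str):                      # atom: any value
            choices = range(3)
        elif g[0] == 'neg':
            choices = NEG[val[g[1]]]
        else:                                       # 'and'
            choices = AND[val[g[1]], val[g[2]]]
        for v in choices:
            val[g] = v
            yield from extend(i + 1, val)
    yield from extend(0, {})

def valid(f, restriction=lambda val: True):
    """f is valid iff every surviving legal valuation designates it."""
    return all(val[f] in DESIGNATED
               for val in valuations(f) if restriction(val))

def restrict(val):
    # Toy elimination criterion: on the middle value, negation must
    # return 1 -- this filters out "unsound" rows, as RNmatrices do.
    return all(val[g] == 1 for g in val
               if isinstance(g, tuple) and g[0] == 'neg' and val[g[1]] == 1)

f = ('neg', ('and', 'p', ('neg', 'p')))   # not (p and not p)
print(valid(f))                           # False in the raw Nmatrix
print(valid(f, restrict))                 # True once rows are restricted
```

The same validity question, posed over the paper's much larger matrices, is what gets handed to an SMT solver instead of enumerated.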

Collusivity identifies balanced attack networks without symmetry

Collusion Relations and their Applications to Balance Theory

A quadrangular property on relations interpreted as attacks generalizes the classical balance theorem to non-symmetric cases.

Figure from the paper full image
abstract click to expand
We study quadrangular properties of binary relations on a set $X$ -- i.e., properties defined on configurations of four elements -- within an agonistic interpretation, where $xRy$ is interpreted as $x$ "attacks" $y$. Such relations induce a suitable notion of "protection," and we provide necessary and sufficient conditions for this notion to be consistent. We characterize the balance property in signed frames in terms of a specific quadrangular property, namely collusivity. In this way, we generalize a classical result in balance theory by offering an alternative method for determining whether a network is polarized. That is, one can identify well-formed groups of agents that agree with one another within the same group (a set of allies) while disagreeing with, or attacking, agents outside the group. Furthermore, we extend the balance theorem to non-symmetric relations, thereby relaxing a condition required in standard balance theory. We conclude by giving a modal characterization of collusive frames, together with corresponding rules in a labeled sequent calculus, and we show that previous modal characterizations of balance are derivable within this system.
0
0
cs.LO 2026-05-04

ABox abduction complexity mapped for inconsistent KBs

ABox Abduction for Inconsistent Knowledge Bases under Repair Semantics

Repair semantics make the problem well-defined, with complete complexity classifications for variants in DL-Lite and EL_bot.

abstract click to expand
Given a knowledge base (KB) with a non-entailed fact, the ABox abduction problem asks for possible extensions of the KB that would entail this fact. This problem has many applications, ranging from diagnosis to explainability and repair. ABox abduction has been well-investigated for consistent KBs and classical semantics, but little is known for the case of inconsistent KBs, which can be caused by erroneous data. In this paper we define suitable notions of abduction in this setting and propose criteria that guide abduction towards "useful" hypotheses. To regain meaningful reasoning in the presence of inconsistencies, we use well-established repair semantics. We provide a comprehensive landscape of the complexity of ABox abduction under repair semantics, treating different variants of the abduction problem for the light-weight description logics DL-Lite and EL_bot.
0
0
cs.LO 2026-05-04

Lean 4 delivers sorry-free proof of Stokes theorem for singular cubes

Stokes' Theorem for Smooth Singular Cubes in Lean 4: True Pullback, Bridges to mathlib4, and Chain-Level d²=0

The formalization uses genuine Fréchet pullbacks and proves chain-level properties including d squared equals zero

abstract click to expand
We present a sorry-free Lean 4/mathlib4 formalization of Stokes' theorem for smooth singular cubes in arbitrary dimension, using true differential-form pullback via the Fréchet derivative. The development also includes a bridge to mathlib4's abstract extDeriv, chain-level Stokes extended by Z-linearity, d^2=0 for singular cubical chains, box Stokes for axis-aligned cubes, dimensional specializations, and a structured comparison with Harrison's HOL Light formalization.
0
0
cs.LO 2026-05-04

Categorical univalence does not imply function extensionality

Univalence without function extensionality

Polynomial models over a univalent base keep a categorically univalent universe while refuting extensional equality of functions.

abstract click to expand
It is a well-known theorem of homotopy type theory, originally due to Voevodsky, that function extensionality holds inside any univalent universe. We consider a weaker variant of the univalence axiom, asserting that the wild category formed by the universe is univalent, which we call categorical univalence. We show that categorical univalence does not imply function extensionality by an analysis of von Glehn's polynomial model construction, which produces models of Martin-Löf type theory that always refute function extensionality. We find in particular that when the base model has a univalent universe, its polynomial model has a universe that is categorically univalent but lacks function extensionality.
0
0
cs.LO 2026-05-04
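Schematically, the principles at stake can be contrasted as follows (our notation; $\mathrm{Iso}$ denotes isomorphism in the wild category of types, i.e. maps with two-sided inverses but no higher coherence data):

```latex
% For f, g : A -> B and a universe U:
\mathrm{funext} \;:\; \Bigl(\textstyle\prod_{x:A} f\,x =_{B} g\,x\Bigr) \to f =_{A\to B} g
\qquad
\mathrm{UA} \;:\; (A =_{\mathcal U} B) \;\simeq\; (A \simeq B)

% Categorical univalence weakens UA: the canonical map from identities
% lands in wild-categorical isomorphisms rather than equivalences:
\mathrm{catUA} \;:\; (A =_{\mathcal U} B) \;\simeq\; \mathrm{Iso}(A, B)
```

Voevodsky's theorem is UA → funext; the paper's counter-model shows catUA does not suffice.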

Cache reuses work across similar #SAT instances

Efficient Incremental #SAT via Cross-Instance Knowledge Reuse

Persistent storage of component data speeds up model counting for sequences of structurally close formulas in argumentation and soft core.

Figure from the paper full image
abstract click to expand
Model counting (#SAT) is a fundamental yet #P-complete problem central to probabilistic reasoning. In this work, we address incremental model counting, where sequences of structurally similar formulas must be counted. We propose an approach that amortizes computation via a persistent caching mechanism, retaining component data across solver calls to avoid redundant search. Additionally, we investigate branching heuristics adapted for this setting. We focus on the problems of argumentation and soft core, for which incremental model counting is natural. Experiments demonstrate that our method improves performance compared to current model counters, highlighting the capability of structure-aware reuse in dynamic environments.
0
0
cs.LO 2026-05-01
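A minimal sketch of the underlying idea, component decomposition plus a cache that persists across solver calls, using our own toy representation (clauses as frozensets of signed integers); the paper's counter is of course far more sophisticated.

```python
CACHE = {}   # persists across calls: component -> model count

def simplify(clauses, lit):
    """Set lit true: drop satisfied clauses, shrink the rest."""
    out = set()
    for c in clauses:
        if lit in c:
            continue
        if -lit in c:
            c = c - {-lit}
            if not c:
                return None                  # empty clause: conflict
        out.add(c)
    return frozenset(out)

def components(clauses):
    """Split a clause set into variable-disjoint components (union-find)."""
    clauses = list(clauses)
    parent = list(range(len(clauses)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    owner = {}
    for i, c in enumerate(clauses):
        for v in map(abs, c):
            if v in owner:
                parent[find(i)] = find(owner[v])
            else:
                owner[v] = i
    groups = {}
    for i, c in enumerate(clauses):
        groups.setdefault(find(i), set()).add(c)
    return [frozenset(g) for g in groups.values()]

def count(clauses):
    """Number of models over the variables occurring in `clauses`."""
    if not clauses:
        return 1
    if clauses in CACHE:
        return CACHE[clauses]
    comps = components(clauses)
    if len(comps) > 1:
        n = 1
        for comp in comps:                   # each component is cached, so a
            n *= count(comp)                 # later similar instance reuses it
    else:
        allvars = {abs(l) for c in clauses for l in c}
        v = min(allvars)
        n = 0
        for lit in (v, -v):
            s = simplify(clauses, lit)
            if s is None:
                continue
            freed = allvars - {v} - {abs(l) for c in s for l in c}
            n += count(s) * 2 ** len(freed)  # vanished variables are free
    CACHE[clauses] = n
    return n

base = frozenset({frozenset({1, 2}), frozenset({-1, 3})})
print(count(base))                  # 4 models of (1 or 2) and (-1 or 3)
extended = base | {frozenset({4, 5})}
print(count(extended))              # 12; the component on vars 1-3 is a CACHE hit
```

The second call only branches on the new component: the shared component's count is answered from the persistent cache, which is the amortization the abstract describes.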

Polymorphic DHOL translates to HOL for automation

Polymorphism Meets DHOL

Syntax and semantics for the extension let existing HOL provers handle dependent polymorphic theories on TPTP examples.

Figure from the paper full image
abstract click to expand
DHOL is an extensional, classical logic that equips the well-known higher-order logic (HOL) with dependent types. This allows for concise encodings of important domains like size-bounded data structures, category theory, or proof theory. Automation support is obtained by translating DHOL to HOL, for which powerful modern automated theorem provers are available. However, a critically missing feature of DHOL is polymorphism. We develop the syntax and semantics of polymorphic DHOL and extend the translation accordingly. We implement the translation in the logic-embedding tool and evaluate it on a range of TPTP formalizations. The logic-embedding tool, together with an off-the-shelf HOL theorem prover, readily yields a PDHOL theorem prover for experimentation.
0
0
cs.LO 2026-05-01

CMSO low-monodimensionality fragments admit FPT model checking on topological-minor-free graphs

Model Checking for Low Monodimensionality Fragments of CMSO on Topological-Minor-Free Graph Classes

The result lifts meta-theorems from first-order logic to a richer counting monadic second-order fragment on topological-minor-free classes.

abstract click to expand
Algorithmic meta-theorems explain the tractability of large classes of computational problems by linking logical expressibility with structural graph properties. While extensions of first-order logic such as FO+dp admit efficient model checking on graph classes excluding a fixed topological minor, comparable results for richer fragments of CMSO were previously unknown. We further develop the framework of Sau, Stamoulis, and Thilikos [SODA 2025] for fragmenting CMSO via annotated graph parameters, which restrict set quantification to vertex sets satisfying bounded structural conditions. Following this approach, we identify a fragment of CMSO defined by allowing quantification only over sets having what we call low monodimensionality; this fragment generalizes several previously known logics. We show that model checking for this fragment, enhanced with the disjoint-paths predicate, is fixed-parameter tractable on topological-minor-free graph classes. Such classes essentially delimit the tractability of this logic on subgraph-closed classes. As a consequence, our results lift several known algorithmic meta-theorems beyond first-order logic to the topological-minor-free setting.
0
0
cs.LO 2026-05-01

Semantic Numeration Systems defined for positive rationals

Non-negative Rational Semantic Numeration Systems

Cardinal operators set common carry and remainder rules, yielding dynamical properties and partial integer extensions.

abstract click to expand
A new class of Semantic Numeration Systems, namely, positive rational Semantic Numeration Systems is introduced. For cardinal semantic operators, differences in the formation of carry (common carry) and remainders are defined. The properties of positive rational Semantic Numeration Systems as dynamical systems are formulated and illustrated through analytical and numerical examples. A first attempt at defining partial integer Semantic Numeration Systems is proposed.
0
0
cs.LO 2026-05-01

Pipeline turns human safety goals into verified logical rules

Towards Neuro-symbolic Causal Rule Synthesis, Verification, and Evaluation Grounded in Legal and Safety Principles

In autonomous driving tests the system derives minimal necessary and sufficient first-order constraints that align with supplied principles.

Figure from the paper full image
abstract click to expand
Rule-based systems remain central in safety-critical domains but often struggle with scalability, brittleness, and goal misspecification. These limitations can lead to reward hacking and failures in formal verification, as AI systems tend to optimize for narrow objectives. In previous research, we developed a neuro-symbolic causal framework that integrates first-order logic abduction trees, structural causal models, and deep reinforcement learning within a MAPE-K loop to provide explainable adaptations under distribution shifts. In this paper, we extend that framework by introducing a meta-level layer designed to mitigate goal misspecification and support scalable rule maintenance. This layer consists of a Goal/Rule Synthesizer and a Rule Verification Engine, which iteratively refine a formal rule theory from high-level natural-language goals and principles provided by human experts. The synthesis pipeline employs large language models (LLMs) to: (1) decompose goals into candidate causes, (2) consolidate semantics to remove redundancies, (3) translate them into candidate first-order rules, and (4) compose necessary and sufficient causal sets. The verification pipeline then performs (1) syntax and schema validation, (2) logical consistency analysis, and (3) safety and invariant checks before integrating verified rules into the knowledge base. We evaluated our approach with a proof-of-concept implementation in two autonomous driving scenarios. Results indicate that, given human-specified goals and principles, the pipeline can successfully derive minimal necessary and sufficient rule sets and formalize them as logical constraints. These findings suggest that the pipeline supports incremental, modular, and traceable rule synthesis grounded in established legal and safety principles.
0
0
cs.LO 2026-05-01 Recognition

Algebraic generating functions decide termination for probabilistic programs

On Higher-Order Probabilistic Verification via the Weighted Relational Model of Linear Logic

Interpreting schemes via weighted relational semantics shows the functions solve polynomial equations, enabling checks for probability-one termination.

Figure from the paper full image
abstract click to expand
The problem of determining whether a probabilistic program terminates almost surely (i.e., with probability one) is undecidable, and actually $\Pi^0_2$-complete. For this reason, a growing literature has explored classes of programs for which this and related problems can be shown (semi-)decidable. In this work we consider the termination problem for the language of Probabilistic Higher-Order Recursion Schemes (PHORS). Using the weighted relational semantics of linear logic, we translate this problem into the computation of suitable generating functions associated with the interpreted program. This way, we establish the decidability of almost-sure termination for a class of programs that extends Li et al.'s affine PHORS via a type discipline with bounded exponentials. To achieve this, we show that the generating functions for such programs are always algebraic, that is, solutions of polynomial equations, yielding an effective method to answer the termination problem.
0
0
cs.LO 2026-05-01
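The flavour of the result can be seen on the textbook example (ours, not a PHORS): a procedure that halts with probability p and otherwise calls itself twice terminates with probability equal to the least solution of the algebraic equation x = p + (1 - p)x², which Kleene iteration approximates from below.

```python
def termination_prob(p, iters=200):
    """Least fixed point of x = p + (1 - p) * x**2, computed by
    Kleene iteration from 0 (monotone, converges from below)."""
    x = 0.0
    for _ in range(iters):
        x = p + (1 - p) * x * x
    return x

# For p = 1/3 the equation 2x^2 - 3x + 1 = 0 has roots 1/2 and 1;
# the *least* root, 1/2, is the termination probability.
print(round(termination_prob(1 / 3), 6))   # 0.5
# For p = 2/3 the least root is 1: almost-sure termination.
print(round(termination_prob(2 / 3), 6))   # 1.0
```

Deciding whether the least root equals 1 is exactly an algebraic question about a polynomial system, which is what makes the termination problem effective for the class studied in the paper.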

SCAN extended to compute witnesses for second-order quantifiers

Computing Witnesses Using the SCAN Algorithm

Modifications to the saturation process on clause sets produce an explicit witness that yields an equivalent first-order formula.

abstract click to expand
Second-order quantifier elimination is the problem of finding, given a formula with second-order quantifiers, a logically equivalent first-order formula. While such formulas are not computable in general, there are practical algorithms and subclasses with applications throughout computational logic. One of the most prominent algorithms for second-order quantifier elimination is the saturation-based SCAN algorithm. In this paper we show how the SCAN algorithm on clause sets can be extended to solve a more general problem: namely, finding a witness for the second-order quantifiers that results in a logically equivalent first-order formula. In addition, we provide a prototype implementation of the proposed method.
0
0
cs.LO 2026-05-01

Coalition Logic gains explicit inability as negated ability

A Logic of Inability

The conservative extension reveals anti-monotonicity, asymmetric connectives, failed superadditivity, and a link between grand-coalition inability and systemic impossibility.

abstract click to expand
Coalition Logic is primarily concerned with what coalitions can achieve, whereas what coalitions cannot achieve -- their inability -- has received comparatively little explicit attention. This asymmetry matters in artificial intelligence and safety-critical multi-agent systems, where one often needs to specify not merely what agents are instructed or disposed not to do, but what they are unable to bring about. We develop a conservative extension of Coalition Logic with an explicit inability operator, interpreted as the negation of coalition ability. This operator is introduced as a conservative and formally tractable starting point for studying inability as a modal concept in its own right. We prove soundness, completeness, and conservativity over standard Coalition Logic, and analyse the resulting modal behaviour: anti-monotonicity with respect to coalition inclusion, contravariance with respect to goal strength, asymmetric interaction with conjunction and disjunction, failure of superadditivity, non-equivalence with opponent ability, and the connection between grand-coalition inability and systemic impossibility. Making this definable operator explicit reveals a systematic modal structure governing the limits of agency, and supports reasoning about constraints, negative capabilities, and impossibility in multi-agent systems.
1 0
0
cs.LO 2026-05-01

Order-invariant cluster logic equals plain FO on bounded-degree graphs

Order-invariant cluster first-order logic on graph classes of bounded degree

Explicit construction of similarity-preserving linear orders shows invariance adds no extra power under degree bounds.

abstract click to expand
We introduce a new logic, called \emph{cluster first-order logic}, a restricted fragment of first-order logic specifically designed to study order invariance. An order-invariant formula is one on a vocabulary that contains an order; however, whether a structure satisfies it or not is independent of the interpretation of the order. We show that while order-invariant cluster first-order logic can define properties outside the scope of plain first-order logic in general, its expressive power is included in that of first-order logic when it comes to classes of bounded degree. We establish this result by explicitly constructing linear orders such that similar structures remain similar when they are expanded with these orders. This similarity-preserving, local-to-global approach is technically involved and somewhat counterintuitive, since adding an order usually reveals distinctions that are otherwise hidden due to the locality of first-order logic. We believe that this work can be a stepping stone toward applying such techniques to plain first-order logic and toward settling the question of the expressive power of order-invariant plain first-order logic.
0
0
cs.LO 2026-05-01

BDD solver enumerates all ADF models on large biological networks

BAss: Symbolic Reasoning in Abstract Dialectical Frameworks

BAss finishes complete sets of interpretations where prior BDD tools and many SAT solvers run out of memory or time.

Figure from the paper full image
abstract click to expand
We present BAss (BDD-based ADF symbolic solver), a novel analysis tool for Abstract Dialectical Frameworks (ADFs) based on Binary Decision Diagrams (BDDs). It supports the fully symbolic computation of all admissible, complete, and preferred interpretations, as well as two-valued and stable models of an ADF. Our approach is inspired by the recently discovered equivalence between Boolean Networks (BNs) and ADFs by Heyninck et al. (2024) and Azpeitia et al. (2024), significantly extending current BDD-based tools bioLQM, AEON, and adf-bdd. We conducted experiments on a large-scale collection of real-world models from both the BN and ADF communities. Our results show that BAss dramatically outperforms previous BDD-based tools and is competitive (even significantly better in some cases) with state-of-the-art SAT/ASP-based methods, particularly in scenarios involving large solution spaces. Notably, BAss is able to enumerate all fixed points or minimal trap spaces of certain biological networks beyond the reach of existing tools, thereby enabling new analysis and case studies in systems biology. These results highlight the practical relevance of symbolic reasoning for complex real-world applications, particularly in systems biology and formal argumentation.
1 0
0
cs.LO 2026-04-30
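For intuition: the two-valued models of an ADF are exactly the fixed points v(s) = C_s(v) of its acceptance conditions. The toy enumeration below is ours (BAss computes these symbolically with BDDs rather than by enumeration); it just makes the definition concrete.

```python
from itertools import product

def two_valued_models(statements, condition):
    """Brute-force the two-valued models of an ADF: assignments v
    with v(s) = C_s(v) for every statement s."""
    models = []
    for bits in product((False, True), repeat=len(statements)):
        v = dict(zip(statements, bits))
        if all(condition[s](v) == v[s] for s in statements):
            models.append(v)
    return models

# Toy ADF: a and b attack each other, c supports itself.
stmts = ('a', 'b', 'c')
conds = {'a': lambda v: not v['b'],
         'b': lambda v: not v['a'],
         'c': lambda v: v['c']}
for m in two_valued_models(stmts, conds):
    print(m)    # four models: exactly one of a, b holds; c is free
```

The same fixed-point reading is what links ADFs to Boolean networks, where two-valued models correspond to the networks' fixed points.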

String diagrams axiomatize behavioral distances for nondeterministic processes

A Diagrammatic Axiomatisation of Behavioural Distance of Nondeterministic Processes

Complete rules using Milner's charts let distances be derived without variables or binders.

abstract click to expand
Behavioural distances provide a quantitative approach to comparing the states of transition systems, moving beyond traditional Boolean notions of equivalence. In this paper, we develop a sound and complete axiomatisation of behavioural distance for nondeterministic processes using Milner's charts, a model that generalises finite-state automata by incorporating variable outputs. Charts provide a compelling setting for studying behavioural distances because they shift the focus from language equivalence to bisimilarity. Their axiomatic study lays the groundwork for quantitative analysis of more expressive models, such as weighted transition systems. To formalise this approach, we adopt string diagrams as our syntax of choice. String diagrams closely mirror the graphical structure of charts, while providing a rigorous formalism that supports inductive reasoning and compositional semantics. Unlike traditional algebraic syntaxes, which require additional mechanisms such as binders and substitution, string diagrams offer a variable-free representation where recursion naturally decomposes into simpler components. This makes them well-suited for reasoning about behavioural distances and aligns with broader efforts to axiomatise automata-theoretic equivalences through a unified diagrammatic framework.
0
0
cs.LO 2026-04-30

Stable logical profunctors are exactly those from MLL+MIX proof-nets

Full Definability in a Profunctorial Model

Stability plus totality and logical-family conditions give a complete syntactic characterisation in the profunctor model.

Figure from the paper full image
abstract click to expand
A semantic model enjoys full definability if every semantic element in the model is a denotation of some proof or program. Full definability indicates that the model captures programs and proofs in a highly detailed manner. This paper studies full definability in a model based on the (bi)category of profunctors on groupoids, which is a proof-relevant variant of the relational model. Despite the fact that a profunctor is far more complicated than a relation, we show that a rather straightforward application of the ideas for the relational model, together with the notion of stability in profunctors, provides a complete characterisation of definable profunctors. More precisely, all logical families of stable and total profunctors are definable by proof-nets of multiplicative linear logic with MIX. As a part of the full definability proof, we show that the stability serves as a correctness criterion, which we think is of independent interest.
0
0
cs.LO 2026-04-30

Lecture notes survey the automata-theoretic, temporal, and epistemic foundations of runtime verification

Runtime Verification: Monitoring, Knowledge, and Uncertainty (Lecture Notes)

Lecture notes on automata, temporal logic, and epistemic foundations of runtime verification for monitoring partially observable systems…

abstract click to expand
Runtime verification is a lightweight verification technique that complements model checking by analyzing system executions at runtime rather than exploring a complete system model in advance. It is particularly useful for partially observable or black-box systems, where uncertainty can only be resolved through observation. These lecture notes present automata-theoretic, temporal-logical, and epistemic foundations of runtime verification. They cover specification formalisms, diagnosis, opacity, and monitorability, and explain how offline analysis can be used to construct monitors that operate online on observed executions. The notes also discuss timed extensions and the additional algorithmic and semantic challenges that arise in the real-time setting.
0
0
cs.LO 2026-04-30

The RMDP threshold problem reduces from parity games

On the Complexity of Robust Markov Decision Processes and Bisimulation Metrics

A polynomial algorithm for deciding whether worst-case value exceeds a threshold would place parity games in P.

abstract click to expand
Robust Markov decision processes (RMDPs) extend standard Markov decision processes (MDPs) to account for uncertainty in the transition probabilities. RMDPs have an uncertainty set that defines a set of possible transition functions, each of which induces a standard MDP. The natural objective in an RMDP is to optimize the discounted cumulative reward under the worst-case transition function in the uncertainty set. We study the complexity of the associated threshold problem for RMDPs with polytopic uncertainty sets in halfspace representation. Previous results focused on approximating the optimum or restricted attention to specific subclasses of RMDPs, such as interval MDPs or $L_\infty$-RMDPs. Our contributions are threefold: (1) For (s,a)-rectangular RMDPs, we prove that robust policy evaluation is in P via robust linear programming, and that the threshold problem is in NP. As a corollary, robust policy iteration is a polynomial-time algorithm for these RMDPs when the discount factor is fixed. (2) For $s$-rectangular RMDPs, we show that the threshold problem is in PSPACE via the first-order theory of the reals. (3) We establish lower bounds by reducing both parity games and bisimulation metrics between MDP states to the RMDP threshold problem. A polynomial-time algorithm for the threshold problem would resolve the long-standing open question of whether parity games can be solved in polynomial time. The reduction from bisimulation metrics also yields a practical benefit: it allows us to apply robust policy iteration as a more efficient alternative to the standard fixed-point iteration, as our empirical evaluation demonstrates.
0
0
cs.LO 2026-04-30
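For the (s,a)-rectangular case with interval uncertainty sets, robust policy evaluation reduces to an inner minimisation with a simple greedy solution, which makes robust value iteration easy to sketch. The example below is a toy interval MDP of ours, not the paper's general halfspace polytopes.

```python
def worst_dist(intervals, values):
    """Distribution inside the intervals minimising expected value:
    start from the lower bounds, then pile the remaining mass onto
    successors in increasing order of their current value."""
    dist = {s: lo for s, (lo, hi) in intervals.items()}
    budget = 1.0 - sum(dist.values())
    for s in sorted(intervals, key=lambda s: values[s]):
        lo, hi = intervals[s]
        add = min(hi - lo, budget)
        dist[s] += add
        budget -= add
    return dist

def robust_vi(states, actions, reward, intervals, gamma=0.9, iters=500):
    """Robust value iteration: max over actions, min over the
    transition functions allowed by the intervals."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward[s, a]
                    + gamma * sum(p * V[t] for t, p in
                                  worst_dist(intervals[s, a], V).items())
                    for a in actions[s])
             for s in states}
    return V

states = ('start', 'goal')
actions = {'start': ('go',), 'goal': ('stay',)}
reward = {('start', 'go'): 0.0, ('goal', 'stay'): 1.0}
intervals = {('start', 'go'): {'start': (0.2, 0.6), 'goal': (0.4, 0.8)},
             ('goal', 'stay'): {'goal': (1.0, 1.0)}}
V = robust_vi(states, actions, reward, intervals)
print(round(V['goal'], 3))    # 10.0  (= 1 / (1 - gamma))
print(round(V['start'], 3))   # 7.826 (= 0.36 * 10 / 0.46, worst case)
```

The greedy inner step is a special case of the robust linear program the paper uses for general polytopic sets; the hardness results concern the threshold question for those general sets, not this interval special case.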

Orchestrated modules boost SMT solving for prime-field polynomials

An Effective Orchestral Approach to Satisfiability Modulo Prime Fields

By coordinating completeness-efficiency trade-offs inside the theory solver, the method outperforms prior tools on zero-knowledge proof compiler correctness and arithmetic-circuit verification benchmarks.

Figure from the paper
abstract
Zero-knowledge proofs (ZKPs) are an emerging technology that efficiently provides security and privacy alongside the transparency requirements of blockchains. ZKPs are usually expressed by means of arithmetic circuits and, more generally, systems of polynomial equations over a large prime field (commonly ranging from 64-bit to 256-bit values). Increasing interest in applying formal verification techniques to ensure soundness and completeness properties of ZKP protocols has shown the need for powerful SMT solvers able to handle such constraint systems. In this paper we consider the problem of deciding the satisfiability of existentially quantified first-order formulas defined over polynomial equations on a prime field. We present a new DPLL($T$)-based approach in which the theory solver orchestrates several modules with different trade-offs between completeness and efficiency. We have implemented the proposed techniques in a prototype that already shows better results than existing state-of-the-art tools on both benchmarks from the domain of ZKP compiler correctness and new benchmarks coming from the verification of arithmetic circuits for ZKPs. Keywords: SMT, finite fields, polynomials, zero-knowledge proofs.
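For intuition only: over a tiny prime field, satisfiability of a polynomial system can be decided by exhaustive search — the naive baseline that the paper's orchestrated theory modules improve on. A sketch with hypothetical names; real ZKP fields of 64 to 256 bits make this approach hopeless, which is precisely the motivation for smarter theory reasoning.

```python
from itertools import product

def ff_sat(equations, num_vars, p):
    """Search GF(p)^num_vars exhaustively for a common root.

    `equations` are callables returning an integer; an assignment is a root
    when every equation vanishes mod p. Feasible only for tiny p and few
    variables -- illustrative, not a practical decision procedure.
    """
    for assignment in product(range(p), repeat=num_vars):
        if all(eq(*assignment) % p == 0 for eq in equations):
            return assignment
    return None
```

For example, x^2 + 1 = 0 has the root x = 2 over GF(5) but no root over GF(3).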
cs.LO 2026-04-30

Observable progression builds belief DFA on-the-fly for LTLf synthesis

On-the-fly LTLf Synthesis under Partial Observability

It avoids constructing unneeded states upfront by quantifying hidden variables during formula progression.

Figure from the paper
abstract
LTLf synthesis under partial observability requires reasoning about unobservable environment variables, which is typically handled by constructing a belief-state DFA via subset construction that universally quantifies these variables. Existing approaches perform this construction as a separate step prior to game solving, often generating belief states that are unnecessary in practice. We propose an on-the-fly approach to LTLf synthesis under partial observability based on observable progression. Our method incrementally builds the belief-state DFA by progressing the specification with respect to observable variables only, universally quantifying unobservable variables on the fly. We prove the correctness of the construction and show that it naturally enables on-the-fly game solving, leading to a fully on-the-fly synthesis framework. Our implementation leverages DFAs represented using Multi-Terminal Binary Decision Diagrams: a compact representation that has proven highly effective for LTLf synthesis under full observability. Experimental results demonstrate that our approach significantly outperforms existing methods and further highlight the practical benefits of integrating on-the-fly game solving with belief-state construction.
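The belief-state subset construction described above can be illustrated on an explicit DFA: a belief collects every state consistent with the observable letters read so far, and acceptance is universal over the hidden choices. A minimal sketch with a hypothetical encoding — the authors' implementation works by formula progression over MTBDD-represented DFAs, not on an explicit transition table like this.

```python
from collections import deque

def belief_dfa(delta, q0, accepting, obs_letters, unobs_letters):
    """On-the-fly subset construction over observable letters only.

    `delta` maps (state, observable, unobservable) to a successor state.
    A belief is the set of states consistent with the observations so far;
    it is accepting only if ALL its members are (the universal
    quantification over hidden letters). Only beliefs reachable from the
    start are ever built -- the on-the-fly aspect.
    """
    start = frozenset([q0])
    trans, acc = {}, {}
    queue, seen = deque([start]), {start}
    while queue:
        b = queue.popleft()
        acc[b] = all(q in accepting for q in b)
        for o in obs_letters:
            nb = frozenset(delta[(q, o, u)] for q in b for u in unobs_letters)
            trans[(b, o)] = nb
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return start, trans, acc
```

On a two-state DFA where the hidden letter decides which successor is taken, the construction builds only the two reachable beliefs rather than all four subsets.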
cs.LO 2026-04-30

Templates turn common code patterns into induction lemmas

Templates in Rewriting Induction

Matching programming constructs to higher-order functions lets bounded rewriting induction prove more equivalences automatically.

Figure from the paper
abstract
Rewriting Induction (RI) is a formal system in term rewriting to establish program equivalence. The recently defined Bounded RI for higher-order Logically Constrained Term Rewriting Systems (LCSTRSs) yields a convenient proof system for analyzing real programming code. A practical challenge in RI is the automatic generation of induction hypotheses, called lemmas. While various lemma generation techniques exist for plain term rewriting, there are much fewer that consider the intricacies brought on by calculations or constraints. Taking advantage of recent developments in higher-order RI, we here present a new approach based on templates, which operates by recognising typical programming constructs as instances of higher-order functions. While templates have been used as a stand-alone method to justify the correctness of program transformations, we here consider their integration in Bounded RI to obtain a complementary lemma generation heuristic. This allows us to prove equivalences that were previously out of reach.
cs.LO 2026-04-30

BDDs compress ACAS-Xu tables while preserving every decision

Compressing ACAS-Xu Lookup Tables with Binary Decision Diagrams

The exact logic fits in far less memory, runs with predictable timing on embedded hardware, and supports direct Boolean safety checks.

Figure from the paper
abstract
The Airborne Collision Avoidance System Xu (ACAS-Xu) relies on large certified Look-Up Tables (LUTs) that encode the exact decision logic used in operation. Neural-network-based approximations have been proposed to reduce memory requirements, but they inherently introduce approximation errors and complicate formal verification. This paper presents a symbolic compression approach based on Binary Decision Diagrams (BDDs) that preserves the exact semantics of the ACAS-Xu LUTs. The resulting representation is canonical, deterministic, and fully equivalent to the original tables, enabling sound and exact reasoning over the complete decision logic. By expressing both the system behavior and domain-specific operational properties within a common Boolean framework, verification reduces to efficient BDD operations and emptiness checks, with precise counterexamples generated when properties are violated. We demonstrate that the proposed BDD-based representation significantly reduces memory usage, achieves predictable and low-latency execution, and can be deployed on embedded platforms. These results highlight BDDs as a compelling alternative for exact, verifiable, and embedded deployment of ACAS-Xu decision logic.
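The source of BDD compression is hash-consing: structurally identical subfunctions of the table collapse into one shared node, and tests whose branches agree are dropped. A toy multi-terminal-style reduction over boolean inputs (hypothetical code — the actual ACAS-Xu tables have multi-bit inputs and advisory outputs, and a production implementation would use a dedicated BDD package):

```python
def build_robdd(f, n):
    """Reduced ordered (multi-terminal) BDD for f over n boolean inputs.

    Hash-consing merges structurally identical subfunctions into one
    shared node; redundant tests are eliminated. Build time here is still
    2^n -- only the resulting diagram is small.
    """
    unique, leaves, counter = {}, {}, [0]

    def fresh():
        counter[0] += 1
        return counter[0] - 1

    def leaf(value):
        if value not in leaves:
            leaves[value] = fresh()
        return leaves[value]

    def node(var, lo, hi):
        if lo == hi:                  # both branches agree: skip the test
            return lo
        key = (var, lo, hi)
        if key not in unique:         # hash-consing: one id per subgraph
            unique[key] = fresh()
        return unique[key]

    def build(var, prefix):
        if var == n:
            return leaf(f(prefix))
        return node(var, build(var + 1, prefix + (0,)),
                         build(var + 1, prefix + (1,)))

    root = build(0, ())
    return root, len(unique) + len(leaves)
```

Three-bit parity reduces from an 8-entry table to 7 nodes (5 decision nodes plus 2 leaves), while a constant table collapses to a single leaf.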
cs.LO 2026-04-30

Hesitant tree automata match first-order logic on infinite trees

Automaton-based Characterisations of First Order Logic over Infinite Trees

They capture PolPCTL and CTLsf, revealing that FO is limited to safety or co-safety properties along branches.

abstract
We study the expressive power of First-Order Logic (FO) over (unordered) infinite trees, with the aim of identifying robust characterisations in terms of branching-time specification formalisms. While such correspondences are well understood in the linear-time setting, the branching-time case presents well-known structural challenges. To this end, we introduce two classes of hesitant tree automata and show that they capture precisely the expressive power of two branching-time temporal logics, namely PolPCTL and CTLsf, both of which have been previously shown to be equivalent to FO over infinite trees. These results provide uniform automata-theoretic characterisations and yield a natural normal form for the latter in terms of a new fragment of CTL* called PolCTL*. As a consequence, we identify a fundamental limitation of FO in this setting: along each branch, it can express only properties that are either safety or co-safety, thereby revealing a sharp expressive boundary for first-order definability over infinite trees.
cs.LO 2026-04-29

The paper defines a compositional semantics for quantum Bayesian networks that matches the standard factor-based semantics when all causes are classical

Quantum Bayesian Networks: Compositionality and Typing via Linear Logic

Quantum Bayesian networks receive a compositional semantics and linear-logic typing that reduces to standard Bayesian networks for classical causes and to tensor networks in the purely quantum case.

abstract
Quantum Bayesian networks provide a mathematical formalism to describe causal relations, to analyse correlations, and to predict the probabilities of measurement outcomes, in systems involving both classical and quantum data. They generalize Pearl's Bayesian networks, prominent graphical models for classical probabilistic reasoning and inference. Our paper brings compositional principles and a typing discipline into this setting. A key feature of our compositional semantics is that when all causes are classical, it coincides with the standard factor-based semantics of Bayesian networks, while in the purely quantum case it reduces to tensor networks. We then propose a typed formalism based on linear logic proof-nets, where types ensure well-behaved composition of systems.
cs.LO 2026-04-29

New logic captures agents gaining and losing actions

I Would If I Could: Reasoning about Dynamics of Actions in Multi-Agent Systems

ATL-D and ATEL-D track dynamic grants, revokes, and resulting knowledge shifts in multi-agent strategic reasoning.

Figure from the paper
abstract
Autonomous agents acting in realistic Multi-Agent Systems (MAS) should be able to adapt during their execution. Standard strategic logics, such as Alternating-time Temporal Logic (ATL), model agents' state- or history-dependent behaviour. However, the dynamic treatment of agents' available actions and their knowledge of required actions is still rarely addressed. In this paper, we introduce ATL with Dynamic Actions (ATL-D), which models the process of granting and revoking actions, and its extension ATEL-D, which captures how such updates affect agents' knowledge. Beyond the conceptual contribution, we provide several technical results: we analyse the expressivity of our logic in relation to ATL, study its relation to normative systems, and provide complexity results for relevant computational problems.
cs.LO 2026-04-29

Lecture notes map verification of neural networks

Verification of Neural Networks (Lecture Notes)

Theoretical treatment of feed-forward, recurrent, and transformer models using specifications and algorithms

Figure from the paper
abstract
These lecture notes provide an introduction to the verification of neural networks from a theoretical perspective. We discuss feed-forward neural networks, recurrent neural networks, attention mechanisms, and transformers, together with specification languages and algorithmic verification techniques.
cs.LO 2026-04-29

Every positional ω-regular property fits inside LTL

Positional Properties in Temporal Logic

This supplies tractable synthesis for positional game specifications and shows no Boolean-closed class can contain prefix-independent ones.

abstract
We study positional properties in the context of game-based reactive synthesis. Our motivation stems from having a usable specification logic, for which tractable synthesis is guaranteed. We demonstrate that every $\omega$-regular positional property (with respect to state- or edge-labelled game graphs), is expressible in linear-time temporal logic. Additionally, we provide some necessary and sufficient conditions for when an $\omega$-regular property is positional, and identify well-behaved subclasses of $\omega$-regular positional properties. Using varieties of languages, we prove that no class of $\omega$-regular positional properties can simultaneously contain a prefix-independent property and be closed under Boolean operations. We conclude by discussing the implications on alternating-time temporal logic, where we isolate a few different fragments with tractable model checking, and compare the associated expressivity of such fragments.
cs.LO 2026-04-29

Partially finite entailment in DL S stays in 2-EXPTIME

Partially Finite Model Reasoning in Description Logics Extended Version

Infinite model surgery unifies finite and infinite query reasoning and reduces closed-predicate containment to the same task.

Figure from the paper
abstract
Aiming to harmonise finite and infinite model reasoning, we initiate the study of partially finite models, where the reasoning task comes with a formula that specifies a part of the model that must be finite. We focus on the problem of partially finite query entailment in description logics (DLs): given a knowledge base (KB), a query, and a distinguished concept, decide whether the query holds in all models of the KB that interpret the distinguished concept as a finite set. To break the ground, we work with the DL S, an extension of the basic DL ALC with transitive roles, which is one of the simplest cases where finite and infinite query entailment diverge. Generalising previous results on the finite and infinite cases, we show that also partially finite entailment of conjunctive queries is in 2-exptime for S. The solution involves sophisticated infinite model surgery and goes far beyond combining the arguments for the two special cases. As a direct application, we show how the problem of query containment in the presence of closed predicates can be solved by reduction to partially finite query entailment.
cs.LO 2026-04-29

Two orderings let conditional obligations withdraw on new facts

Defeasible Conditional Obligation in a Two-tiered Preference-based Semantics (Extended Version)

Ideality and normality rankings add nonmonotonic reasoning to preference semantics while linking to standard normative logics.

Figure from the paper
abstract
In response to a concern raised by Horty, this paper develops a two-tiered, preference-based semantic framework for modeling defeasible conditional obligations. The paper extends a Hansson-Lewis style preference semantics for dyadic deontic logic by incorporating a nonmonotonic reasoning mechanism that enables previously derived obligations to be withdrawn when new, potentially conflicting information comes in. The account is bi-preferential: two orderings--ideality and normality--on worlds are employed to address shortcomings in earlier approaches, with a separate ranking method for each. At the nonmonotonic layer, a number of postulates are considered, including antecedent strengthening, inclusion and no-drowning. A connection is established with so-called constrained input/output (I/O) logic--an existing standard for normative reasoning based on a different methodology.
cs.LO 2026-04-29

Atomic flows identify proofs in BV

Proof Identity and Categorical Models of BV

This definition strengthens BV-categories and proves they model the logic soundly.

Figure from the paper
abstract
BV-categories are a recent development that aims to give categorical semantics to proofs in the logic BV. However, due to the absence of a coherence theorem on one side and a well-defined notion of proof identity for BV on the other side, the precise relation between BV-categories and the logic BV is still not clear. To improve on this situation, we define in this paper a notion of proof identity for BV, based on the notion of atomic flows, which can be seen as a special form of string diagrams. Based on this notion of proof identity, we then strengthen the existing notion of BV-category and prove that it is sound with respect to the logic.
