pith. machine review for the scientific record.

arxiv: 2605.11897 · v2 · submitted 2026-05-12 · 💻 cs.LO

Recognition: no theorem link

Fast Computation of Conditional Probabilities in MDPs and Markov Chain Families

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 20:43 UTC · model grok-4.3

classification 💻 cs.LO
keywords conditional reachability · Markov decision processes · MDPs · Markov chains · abstraction-refinement · probabilistic verification · numerical stability · threshold problems

The pith

An alternative algorithm computes optimal conditional reachability probabilities in MDPs by sidestepping the creation of hard cyclic problems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a new method for calculating optimal conditional reachability probabilities in Markov decision processes. Conventional reductions turn the task into reachability on cyclic MDPs that prove difficult to solve in practice. The presented approach stays numerically stable, matches the speed of ordinary reachability queries, and solves threshold questions in linear time when the MDP is acyclic. It further embeds into an abstraction-refinement loop that handles families of millions of Markov chains together.
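To make the quantity concrete: for a fixed chain, the conditional reachability probability is the ratio Pr(◇G ∧ ◇E) / Pr(◇E). A minimal sketch on a toy acyclic chain of our own construction (not an instance from the paper), computed with exact rationals; since G ⊆ E here, the numerator reduces to Pr(◇G):

```python
from fractions import Fraction

# Transition function of a small acyclic Markov chain: state -> [(succ, prob)].
# States 3 and 4 are absorbing; 'sink' never reaches the targets.
P = {
    0: [(1, Fraction(1, 2)), (2, Fraction(1, 2))],
    1: [(3, Fraction(2, 5)), (4, Fraction(3, 5))],
    2: [(4, Fraction(3, 10)), ("sink", Fraction(7, 10))],
}

def reach(s, target):
    """Probability of eventually visiting `target` from s (acyclic chain)."""
    if s in target:
        return Fraction(1)
    if s not in P:          # absorbing non-target state
        return Fraction(0)
    return sum(p * reach(t, target) for t, p in P[s])

G, E = {3}, {3, 4}
p_joint = reach(0, G)       # here ◇G implies ◇E, since G ⊆ E
p_cond = p_joint / reach(0, E)
print(p_cond)               # 4/13
```

In an MDP the same ratio must additionally be optimised over all policies, which is what makes the problem, and the cyclic reductions it usually triggers, hard.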

Core claim

The authors develop an alternative, practically efficient procedure that computes optimal conditional reachability probabilities directly, preserving numerical stability and optimality while avoiding the cyclic MDPs produced by prior reductions. The procedure decides threshold problems in linear time on acyclic MDPs, performs comparably to standard reachability, and supports an abstraction-refinement framework capable of analysing millions of Markov chains simultaneously.

What carries the argument

The alternative computation procedure that obtains optimal conditional probabilities without constructing or solving the cyclic MDP that arises from the standard reduction.

If this is right

  • Threshold questions are decided in linear time on acyclic MDPs.
  • Runtime remains comparable to ordinary reachability queries on general MDPs.
  • The procedure integrates directly into abstraction-refinement loops.
  • Large families containing millions of Markov chains become feasible to analyse together.
  • Benchmarks drawn from Bayesian networks, probabilistic programs, and runtime monitoring exhibit speed-ups of multiple orders of magnitude.
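The linear-time threshold point is plausible on its face: one textbook-style route (not necessarily the paper's actual procedure) decides whether some policy achieves Pr(◇G | ◇E) ≥ λ by maximising the weighted objective Pr(◇G) − λ·Pr(◇E), which, unlike the ratio itself, decomposes state-by-state and needs only one backward pass on an acyclic MDP. A hedged sketch on a toy instance of our own:

```python
from fractions import Fraction

# Acyclic MDP: state -> {action: [(succ, prob)]}; states 3, 4, 5 absorbing.
A = {
    0: {"a": [(1, Fraction(1, 2)), (2, Fraction(1, 2))],
        "b": [(2, Fraction(1))]},
    1: {"a": [(3, Fraction(2, 5)), (4, Fraction(3, 5))]},
    2: {"a": [(4, Fraction(3, 10)), (5, Fraction(7, 10))]},
}
G, E = {3}, {3, 4}          # goal and evidence sets, G ⊆ E

def decide(lam):
    """Does some policy achieve Pr(◇G | ◇E) >= lam?  One backward pass
    maximises Pr(◇G) - lam*Pr(◇E); the maximum is >= 0 iff the ratio
    meets lam (assuming the maximiser reaches E with positive probability)."""
    v = {s: (1 - lam if s in G else -lam if s in E else Fraction(0))
         for s in (3, 4, 5)}
    for s in (2, 1, 0):      # reverse topological order of non-absorbing states
        v[s] = max(sum(p * v[t] for t, p in succ) for succ in A[s].values())
    return v[0] >= 0

# The best policy here attains Pr(◇G | ◇E) = 4/13.
print(decide(Fraction(4, 13)), decide(Fraction(1, 3)))   # True False
```

The corner case where the maximiser of the weighted objective reaches E with probability zero needs separate handling; presumably the paper's procedure deals with such degeneracies explicitly.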

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The linear-time guarantee on acyclic models could support incremental or online verification tasks where the state space grows gradually.
  • Embedding the method inside abstraction-refinement opens the possibility of analysing parametric or infinite families of chains without enumerating them explicitly.
  • Because the procedure avoids cyclic blow-up, it may extend to related conditional problems such as expected rewards or long-run averages under conditioning.

Load-bearing premise

The new procedure produces exactly the same optimal values as the standard reduction on every input.

What would settle it

An MDP instance on which the new method returns a conditional probability that differs from the exact optimum obtained by solving the reduced cyclic MDP to machine precision.
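That settling experiment can be run in miniature: the restart reduction of [7], as sketched in the Figure 1 caption, rewires probability mass that can no longer reach E back to the initial state, and on any instance its reachability value must equal the direct ratio. A toy cross-check of our own (not a paper benchmark), in exact arithmetic:

```python
from fractions import Fraction as F

def solve(A, b):
    """Exact Gaussian elimination for A @ x = b over rationals."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [row[n] for row in M]

# Toy chain: 0 ->(1/2) 1, 0 ->(1/2) sink; 1 ->(1/3) g, 1 ->(2/3) e.
# G = {g}, E = {g, e}; the sink never reaches E.
# Direct ratio: Pr(◇G)/Pr(◇E) = (1/6)/(1/2).
direct = F(1, 6) / F(1, 2)

# Restart reduction: the 0 -> sink mass is rewired to the initial state,
# giving the cyclic chain 0 ->(1/2) 1, 0 ->(1/2) 0.  Reachability of g
# then satisfies  x0 - (1/2)x0 - (1/2)x1 = 0  and  x1 = 1/3.
restart = solve([[F(1, 2), F(-1, 2)], [F(0), F(1)]], [F(0), F(1, 3)])[0]

print(direct == restart, restart)   # True 1/3
```

A genuine refutation would need the same comparison on an adversarial MDP instance, with the cyclic system solved to machine precision; this sketch only shows the shape of the check.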

Figures

Figures reproduced from arXiv: 2605.11897 by Filip Macák, Luko van der Maas, Milan Češka, Sebastian Junges, Tim Quatmann.

Figure 1. Example MDPs M1 (left) and M′1 (right).
Figure 2. Example MDPs M2 (left), M2,⟳ (middle), M̃2 (right), illustrating the property Prmax_M2(♢{s4} | ♢{s5, s6}) ≤ 1/2. The initial component is shown in blue, together with all of its actions that are not exits; unmarked transition probabilities are distributed uniformly over the transitions of an action.
Figure 3. (Left) Colored MDP MC with the underlying MDP from Fig. 2; the coloring restricts the policy space to policies with σ(s2) = σ(s3) (s5 and s6 are irrelevant, as only one action is available there). (Right) Three Markov chains in the family given by color-consistent policies in MC.
Figure 4. Runtime for solving Problem 1: restart vs treat. Exact arithmetic left, numerical computations right. Points refer to benchmark instances (from BNs, MDPs, IMCs, RunMon); points above the diagonal correspond to a speedup of treat, points above the dashed lines to a 10x or 100x speedup.
Figure 5. Runtime for solving Problem 2: restart vs pt-bis-std.
Figure 6. Runtime for solving Problem 2; left: exact arithmetic.
Figure 7. pt-bis-std vs bis-std: exact (left), numerical (middle), ε-exact (right).
Figure 8. pt-bis-std vs bis-adv: exact (left), numerical (middle), ε-exact (right).
Figure 9. pt-bis-std vs pt-bis-adv: exact (left), numerical (middle), ε-exact (right).
Figure 10. pt-bis-std vs restart: exact (left), numerical (middle), ε-exact (right).
Original abstract

Computing optimal conditional reachability probabilities in Markov decision processes (MDPs) is tractable by a reduction to reachability probabilities. Yet, this reduction yields cyclic, challenging MDPs that are often notoriously hard to solve. We present an alternative, practically efficient method to compute optimal conditional reachabilities. This new method is numerically stable, can decide the threshold problem in linear time on acyclic MDPs, and yields performance comparable to standard reachability queries. We also integrate the method in an abstraction-refinement framework to analyse millions of Markov chains at once. We demonstrate the efficacy of the new methods on benchmarks from Bayesian network analysis, probabilistic programs, and runtime monitoring and show speed-ups up to multiple orders of magnitude.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents an alternative algorithm for computing optimal conditional reachability probabilities in Markov decision processes (MDPs) and families of Markov chains. It claims this method avoids the cyclic MDPs arising from the standard reduction to reachability, is numerically stable, decides threshold problems in linear time on acyclic MDPs, achieves performance comparable to standard reachability queries, and integrates into an abstraction-refinement loop to analyze millions of chains simultaneously. Empirical results on benchmarks from Bayesian networks, probabilistic programs, and runtime monitoring report speed-ups of multiple orders of magnitude.

Significance. If the equivalence to optimal conditional probabilities is rigorously established without hidden approximations or policy restrictions, the work would provide a practically significant advance in probabilistic verification. The linear-time threshold decision on acyclic MDPs and the scalable abstraction-refinement integration are particularly valuable for large-scale applications in monitoring and analysis of model families.

major comments (2)
  1. [Main algorithmic section (likely §4 or §5)] The central equivalence claim—that the new procedure computes exactly the same optimal conditional reachability values as the standard reduction to reachability, without implicit restrictions on the policy space or post-hoc normalizations—requires a detailed formal argument. This is load-bearing for the linear-time result on acyclic MDPs and the reported speed-ups; the abstract asserts it but the provided text does not contain an explicit proof or reduction showing preservation of optimality.
  2. [Numerical stability discussion] Numerical stability is asserted but not demonstrated via concrete analysis (e.g., condition numbers or comparison to exact arithmetic on small instances). This must be addressed to support the claim that the method avoids the hardness of cyclic reductions without introducing instability.
minor comments (2)
  1. [Preliminaries] Clarify the precise definition of 'conditional reachability' early in the preliminaries to avoid ambiguity with standard reachability.
  2. [Experimental evaluation] The benchmark tables would benefit from explicit columns reporting both the new method's values and a reference exact method on a subset of instances to visually confirm equivalence.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and for recognizing the potential practical impact of our algorithm for conditional reachability in MDPs and Markov chain families. We address each major comment below and will revise the manuscript accordingly to strengthen the formal and empirical support.

Point-by-point responses
  1. Referee: [Main algorithmic section (likely §4 or §5)] The central equivalence claim—that the new procedure computes exactly the same optimal conditional reachability values as the standard reduction to reachability, without implicit restrictions on the policy space or post-hoc normalizations—requires a detailed formal argument. This is load-bearing for the linear-time result on acyclic MDPs and the reported speed-ups; the abstract asserts it but the provided text does not contain an explicit proof or reduction showing preservation of optimality.

    Authors: We agree that an explicit formal argument is necessary. The manuscript establishes equivalence by deriving the same optimality equations as the standard reduction (via the conditional probability fixed-point characterization) but solving them directly on the original MDP without introducing auxiliary states or cycles. The policy space is unrestricted, as the algorithm considers all schedulers in the original model, and no post-hoc normalization is performed. To address the concern, we will add a dedicated theorem and complete proof in the revised manuscript (new subsection in §4) that formally reduces our procedure to the standard LP formulation, thereby rigorously supporting both the optimality claim and the linear-time threshold decision on acyclic instances. revision: yes

  2. Referee: [Numerical stability discussion] Numerical stability is asserted but not demonstrated via concrete analysis (e.g., condition numbers or comparison to exact arithmetic on small instances). This must be addressed to support the claim that the method avoids the hardness of cyclic reductions without introducing instability.

    Authors: We acknowledge that the current discussion of numerical stability is primarily qualitative. In the revised manuscript we will add a new subsection providing concrete support: (i) derivation of condition-number bounds for the linear systems arising in our direct formulation (which avoid the ill-conditioned cycles of the standard reduction), and (ii) empirical validation on small benchmark instances using exact rational arithmetic, showing that floating-point results match the exact values to machine precision. These additions will substantiate that the method remains stable while sidestepping the hardness of cyclic MDPs. revision: yes
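The exact-vs-floating-point cross-check proposed in (ii) can be illustrated on a one-state caricature (our construction, not the paper's): a near-1 self-loop of the kind that restart-style rewiring introduces makes naive value iteration with a difference-based stopping criterion halt far outside its tolerance, the known pitfall that interval iteration [19] addresses.

```python
from fractions import Fraction

# One state with self-loop probability p, exiting to a goal and to a fail
# sink with (1-p)/2 each.  Exactly: x = p*x + (1-p)/2  =>  x = 1/2 for
# every p < 1, independent of p.
p = Fraction(9999, 10000)
exact = Fraction(1, 2)

# Naive floating-point value iteration with the usual stopping criterion.
pf, x = float(p), 0.0
while True:
    x_next = pf * x + (1 - pf) / 2
    if abs(x_next - x) < 1e-8:
        x = x_next
        break
    x = x_next

err = abs(x - float(exact))
print(err > 1e-5)   # True: the final error dwarfs the 1e-8 tolerance
```

Replicating such comparisons on small benchmark instances, as the rebuttal promises, would directly substantiate the stability claim.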

Circularity Check

0 steps flagged

No circularity detected; the derivation is a self-contained algorithmic alternative.

full rationale

The paper introduces an alternative method for optimal conditional reachability in MDPs, presented as numerically stable and linearly solvable on acyclic cases without relying on the cyclic reduction. No equations, self-citations, or reductions in the abstract or described claims equate the new procedure to its inputs by construction, rename fitted parameters as predictions, or rest on load-bearing self-referential definitions. The central claims rest on independent algorithmic design that can be verified externally against standard reachability benchmarks, yielding a self-contained derivation with no detectable circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract relies on standard MDP definitions and reachability theory without introducing new free parameters, axioms, or invented entities.

pith-pipeline@v0.9.0 · 5431 in / 1065 out tokens · 33560 ms · 2026-05-14T20:43:18.508539+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

34 extracted references · 34 canonical work pages

  1. [1] de Alfaro, L.: Formal Verification of Probabilistic Systems. Ph.D. thesis, Stanford University, USA (1997)
  2. [2] Andrés, M.E., van Rossum, P.: Conditional probabilities over probabilistic and nondeterministic systems. In: TACAS. pp. 157–172. Lecture Notes in Computer Science, Springer (2008). https://doi.org/10.1007/978-3-540-78800-3_12
  3. [3] Andriushchenko, R., Češka, M., Macák, F., Junges, S., Katoen, J.: An oracle-guided approach to constrained policy synthesis under uncertainty. J. Artif. Intell. Res. 82, 433–469 (2025). https://doi.org/10.1613/JAIR.1.16593
  4. [4] Badings, T., Simão, T.D., Suilen, M., Jansen, N.: Decision-making under uncertainty: beyond probabilities. Int. J. Softw. Tools Technol. Transf. 25(3), 375–391 (2023). https://doi.org/10.1007/S10009-023-00704-3
  5. [5] Badings, T., Volk, M., Junges, S., Stoelinga, M., Jansen, N.: CTMCs with imprecisely timed observations. In: TACAS (2). pp. 258–278. Lecture Notes in Computer Science, Springer (2024). https://doi.org/10.1007/978-3-031-57249-4_13
  6. [6] Baier, C., Katoen, J.: Principles of Model Checking. MIT Press (2008)
  7. [7] Baier, C., Klein, J., Klüppelholz, S., Märcker, S.: Computing conditional probabilities in Markovian models efficiently. In: TACAS. pp. 515–530. Lecture Notes in Computer Science, Springer (2014). https://doi.org/10.1007/978-3-642-54862-8_43
  8. [8] Baier, C., Klein, J., Klüppelholz, S., Wunderlich, S.: Maximizing the conditional expected reward for reaching the goal. In: TACAS (2). pp. 269–285. Lecture Notes in Computer Science (2017). https://doi.org/10.1007/978-3-662-54580-5_16
  9. [9] Bartocci, E., Deshmukh, J.V., Donzé, A., Fainekos, G., Maler, O., Nickovic, D., Sankaranarayanan, S.: Specification-based monitoring of cyber-physical systems: A survey on theory, tools and applications. In: Lectures on Runtime Verification, pp. 135–175. Lecture Notes in Computer Science, Springer (2018). https://doi.org/10.1007/978-3-319-75632-5_5
  10. [10] Batz, K., Chen, M., Junges, S., Kaminski, B.L., Katoen, J., Matheja, C.: Probabilistic program verification via inductive synthesis of inductive invariants. In: TACAS (2). pp. 410–429. Lecture Notes in Computer Science, Springer (2023). https://doi.org/10.1007/978-3-031-30820-8_25
  11. [11] Češka, M., Junges, S., van der Maas, L., Macák, F., Quatmann, T.: Artifact of "Fast computation of conditional probabilities in MDPs and Markov chain families" (May 2026). https://doi.org/10.5281/zenodo.19739061
  12. [12] Chatterjee, K., Henzinger, T.A.: Value iteration. In: 25 Years of Model Checking. pp. 107–138. Lecture Notes in Computer Science, Springer (2008). https://doi.org/10.1007/978-3-540-69850-0_7
  13. [13] Cho, M., Gouwar, J., Holtzen, S.: Scaling optimization over uncertainty via compilation. Proc. ACM Program. Lang. 9(OOPSLA1), 1546–1574 (2025). https://doi.org/10.1145/3720500
  14. [14] Etessami, K., Kwiatkowska, M.Z., Vardi, M.Y., Yannakakis, M.: Multi-objective model checking of Markov decision processes. Log. Methods Comput. Sci. 4(4) (2008). https://doi.org/10.2168/LMCS-4(4:8)2008
  15. [15] Forejt, V., Kwiatkowska, M.Z., Norman, G., Parker, D., Qu, H.: Quantitative multi-objective verification for probabilistic systems. In: TACAS. pp. 112–127. Lecture Notes in Computer Science, Springer (2011). https://doi.org/10.1007/978-3-642-19835-9_11
  16. [16] Gerlach, L., Winkler, T., Ábrahám, E., Bonakdarpour, B., Junges, S.: Efficient probabilistic model checking for relational reachability. In: CAV (1). pp. 127–147. Lecture Notes in Computer Science, Springer (2025). https://doi.org/10.1007/978-3-031-98668-0_6
  17. [17] Graham, R.L., Knuth, D.E., Patashnik, O.: Concrete Mathematics: A Foundation for Computer Science, 2nd Ed. Addison-Wesley (1994)
  18. [18] Gürtler, T., Kaminski, B.L.: noDice: Inference for discrete probabilistic programs with nondeterminism and conditioning. Proc. ACM Program. Lang. 10(OOPSLA1) (Apr 2026). https://doi.org/10.1145/3798215
  19. [19] Haddad, S., Monmege, B.: Interval iteration algorithm for MDPs and IMDPs. Theor. Comput. Sci. 735, 111–131 (2018). https://doi.org/10.1016/J.TCS.2016.12.003
  20. [20] Hartmanns, A., Junges, S., Quatmann, T., Weininger, M.: A practitioner's guide to MDP model checking algorithms. In: TACAS (1). pp. 469–488. Lecture Notes in Computer Science, Springer (2023). https://doi.org/10.1007/978-3-031-30823-9_24
  21. [21] Heck, L., Macák, F., Češka, M., Junges, S.: Constrained and robust policy synthesis with satisfiability-modulo-probabilistic-model-checking. In: AAAI. pp. 36253–36261. AAAI Press (2026). https://doi.org/10.1609/AAAI.V40I43.40944
  22. [22] Hensel, C., Junges, S., Katoen, J., Quatmann, T., Volk, M.: The probabilistic model checker Storm. Int. J. Softw. Tools Technol. Transf. 24(4), 589–610 (2022). https://doi.org/10.1007/S10009-021-00633-Z
  23. [23] Holtzen, S., den Broeck, G.V., Millstein, T.D.: Scaling exact inference for discrete probabilistic programs. Proc. ACM Program. Lang. 4(OOPSLA), 140:1–140:31 (2020). https://doi.org/10.1145/3428208
  24. [24] Junges, S., Torfah, H., Seshia, S.A.: Runtime monitors for Markov decision processes. In: CAV (2). pp. 553–576. Lecture Notes in Computer Science, Springer (2021). https://doi.org/10.1007/978-3-030-81688-9_26
  25. [25] van der Maas, L., Junges, S.: Learning verified monitors for hidden Markov models. In: ATVA. pp. 180–203. Lecture Notes in Computer Science, Springer (2025). https://doi.org/10.1007/978-3-032-08707-2_9
  26. [26] Märcker, S., Baier, C., Klein, J., Klüppelholz, S.: Computing conditional probabilities: Implementation and evaluation. In: SEFM. pp. 349–366. Lecture Notes in Computer Science, Springer (2017). https://doi.org/10.1007/978-3-319-66197-1_22
  27. [27] Mathur, U., Bauer, M.S., Chadha, R., Sistla, A.P., Viswanathan, M.: Exact quantitative probabilistic model checking through rational search. Formal Methods Syst. Des. 56(1), 90–126 (2020). https://doi.org/10.1007/S10703-020-00348-Y
  28. [28] Norman, G., Parker, D., Kwiatkowska, M.Z., Shukla, S.K., Gupta, R.K.: Formal analysis and validation of continuous-time Markov chain based system level power management strategies. In: HLDVT. pp. 45–50. IEEE Computer Society (2002). https://doi.org/10.1109/HLDVT.2002.1224427
  29. [29] Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics, Wiley (1994). https://doi.org/10.1002/9780470316887
  30. [30] Quatmann, T., Katoen, J.: Multi-objective optimization of long-run average and total rewards. In: TACAS (1). pp. 230–249. Lecture Notes in Computer Science, Springer (2021). https://doi.org/10.1007/978-3-030-72016-2_13
  31. [31] Salmani, B., Katoen, J.: Bayesian inference by symbolic model checking. In: QEST. pp. 115–133. Lecture Notes in Computer Science, Springer (2020). https://doi.org/10.1007/978-3-030-59854-9_9
  32. [32] Salmani, B., Katoen, J.: Automatically finding the right probabilities in Bayesian networks. J. Artif. Intell. Res. 77, 1637–1696 (2023). https://doi.org/10.1613/JAIR.1.14044
  33. [33] Skurka, A., van der Maas, L., Junges, S., Torfah, H.: Learning robust Markov models for safe runtime monitoring. In: AAMAS. International Foundation for Autonomous Agents and Multiagent Systems / ACM (2026)
  34. [34] Stoller, S.D., Bartocci, E., Seyster, J., Grosu, R., Havelund, K., Smolka, S.A., Zadok, E.: Runtime verification with state estimation. In: RV. pp. 193–207. Lecture Notes in Computer Science, Springer (2011). https://doi.org/10.1007/978-3-642-29860-8_15