Quantifying Sensitivity for Tree Ensembles: A Symbolic and Compositional Approach
Pith reviewed 2026-05-14 17:45 UTC · model grok-4.3
The pith
The sensitivity of decision tree ensembles to small input changes can be quantified by discretizing the input space and counting susceptible regions via algebraic decision diagrams.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Encoding the discretized sensitivity query as an algebraic decision diagram and splitting the diagram into independently solvable subproblems yields the count of regions susceptible to misclassification efficiently, together with explicit error and confidence bounds, and outperforms standard model counters as ensemble size grows.
What carries the argument
Algebraic decision diagram encoding of the discretized sensitivity function, decomposed into subproblems for compositional evaluation.
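The pipeline this describes — discretize at the ensemble's split thresholds, then count flip-prone regions — can be illustrated on a toy one-dimensional stump ensemble. Everything below (tree shapes, thresholds, the majority vote, the perturbation budget) is an illustrative assumption, not the paper's XCount implementation:

```python
# Hedged sketch: discretize a 1-D input space at the ensemble's split
# thresholds and count cells whose class flips under a small perturbation.

def stump(threshold, left_class, right_class):
    """A depth-1 tree: the class depends on whether x <= threshold."""
    return lambda x: left_class if x <= threshold else right_class

def ensemble_class(trees, x):
    votes = [t(x) for t in trees]
    return max(set(votes), key=votes.count)  # majority vote

trees = [stump(0.3, 0, 1), stump(0.5, 0, 1), stump(0.7, 0, 1)]

# Grid induced by the split thresholds over [0, 1]; one representative per cell.
cuts = sorted({0.0, 1.0} | {0.3, 0.5, 0.7})
cells = list(zip(cuts, cuts[1:]))
reps = [(lo + hi) / 2 for lo, hi in cells]

eps = 0.15  # perturbation budget (illustrative)
sensitive = [
    (lo, hi) for (lo, hi), r in zip(cells, reps)
    if any(ensemble_class(trees, r) != ensemble_class(trees, r + d)
           for d in (-eps, eps) if 0.0 <= r + d <= 1.0)
]
print(len(sensitive), "of", len(cells), "cells are sensitive")
```

On this toy ensemble the majority class changes at 0.5, so exactly the two cells adjacent to that boundary are counted as sensitive.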
If this is right
- Sensitivity analysis becomes feasible for larger ensembles than direct model counting can handle.
- The certified bounds allow safe use of the sensitivity number in downstream verification tasks.
- Compositional decomposition supports parallel execution for further scaling.
- Sensitivity can be compared across different ensemble sizes and depths under the same certified regime.
Where Pith is reading between the lines
- The same ADD decomposition pattern could be reused for other ensemble properties such as robustness margins.
- Adaptive choice of discretization granularity based on tree depth might tighten the error bounds further.
- The method opens a route to sensitivity-guided retraining loops inside automated machine-learning pipelines.
Load-bearing premise
The discretization of the input space together with the algebraic decision diagram encoding captures every sensitivity region without missing or adding errors that would invalidate the certified bounds.
What would settle it
Manually enumerate all sensitive regions on a small decision tree ensemble with known discretization and check whether the computed count lies inside the reported error interval.
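A minimal sketch of this settling experiment, under the assumption that the grid is aligned to the ensemble's split thresholds: brute-force sampling checks that the ensemble's class is constant on every cell of a toy stump ensemble, so counting one representative per cell cannot miss a sensitive region here. All numbers are illustrative:

```python
# Hedged sketch of the proposed check on a toy ensemble: verify by brute
# force that the class is constant inside every split-aligned grid cell.

def stump(threshold, left_class, right_class):
    return lambda x: left_class if x <= threshold else right_class

def ensemble_class(trees, x):
    votes = [t(x) for t in trees]
    return max(set(votes), key=votes.count)

trees = [stump(0.3, 0, 1), stump(0.5, 0, 1), stump(0.7, 0, 1)]
cuts = sorted({0.0, 1.0} | {0.3, 0.5, 0.7})

constant_everywhere = True
for lo, hi in zip(cuts, cuts[1:]):
    # Sample the open interval (lo, hi); thresholds sit only on cell borders.
    samples = [lo + (hi - lo) * k / 100 for k in range(1, 100)]
    classes = {ensemble_class(trees, x) for x in samples}
    constant_everywhere &= (len(classes) == 1)

print("class constant on every cell:", constant_everywhere)
```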
Original abstract
Decision tree ensembles (DTE) are a popular model for a wide range of AI classification tasks, used in multiple safety-critical domains, and hence verifying properties on these models has been an active topic of study over the last decade. One such verification question is the problem of sensitivity, which asks, given a DTE, whether a small change in a subset of features can lead to misclassification of the input. In this work, our focus is to build a quantitative notion of sensitivity, tailored to DTEs, by discretizing the input space of the model and enumerating the regions which are susceptible to sensitivity. We propose a novel algorithmic technique that can perform this computation efficiently, within a certified error and confidence bound. Our approach is based on encoding the problem as an algebraic decision diagram (ADD), and further splitting it into subproblems that can be solved efficiently and make the computation compositional and scalable. We evaluate the performance of our technique over benchmarks of varying size in terms of number of trees and depth, comparing it against the performance of model counters over the same problem encoding. Experimental results show that our tool XCount achieves significant speedup over other approaches and can scale well with the increasing sizes of the ensembles.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims a novel symbolic method to quantify sensitivity of decision tree ensembles: discretize the input space, encode the resulting sensitivity regions as an algebraic decision diagram (ADD), split the ADD into subproblems that can be solved compositionally, and thereby obtain a sensitivity count together with certified error and confidence bounds. The approach is evaluated on benchmarks of varying tree count and depth, where the tool XCount is reported to achieve substantial speedups over direct model-counting encodings of the same problem.
Significance. If the certified bounds are shown to be sound with respect to the continuous input space, the work supplies a practical, scalable technique for quantitative sensitivity analysis of tree ensembles that are already deployed in safety-critical domains. The compositional ADD encoding and the reported performance gains over off-the-shelf model counters constitute a concrete engineering advance that could be adopted by existing verification tool-chains.
Major comments (2)
- §3.2 (Discretization and ADD construction): the certified error and confidence bounds are stated to hold for the continuous sensitivity measure, yet the manuscript does not supply an explicit argument that every threshold-induced decision boundary in the ensemble is either aligned with or conservatively over-approximated by the chosen discretization grid. Without such a guarantee, a perturbation that crosses a split between two grid cells can be misclassified, rendering the reported bounds unsound for the original continuous problem.
- §4 (Compositional splitting): the complexity analysis of the ADD splitting procedure is only sketched; a precise bound on the number and size of subproblems generated as a function of ensemble depth and number of trees is required to substantiate the scalability claims made in the abstract and §5.
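The alignment argument the first major comment asks for can be sketched directly: if the grid is built from the thresholds extracted from the ensemble itself, every decision boundary coincides with a grid line by construction. The dict-based tree encoding below is an assumption made for illustration, not the paper's representation:

```python
# Hedged sketch: extract the finite set of split thresholds from toy
# dict-encoded trees and build the per-feature grid directly from them,
# so no decision boundary can fall strictly inside a grid cell.

def thresholds(node, acc=None):
    """Collect all feature -> {thresholds} splits from a dict-encoded tree."""
    if acc is None:
        acc = {}
    if "leaf" in node:
        return acc
    acc.setdefault(node["feature"], set()).add(node["threshold"])
    thresholds(node["left"], acc)
    thresholds(node["right"], acc)
    return acc

tree1 = {"feature": 0, "threshold": 0.4,
         "left": {"leaf": 0},
         "right": {"feature": 1, "threshold": 0.6,
                   "left": {"leaf": 1}, "right": {"leaf": 0}}}
tree2 = {"feature": 0, "threshold": 0.7,
         "left": {"leaf": 0}, "right": {"leaf": 1}}

grid = {}
for tree in (tree1, tree2):
    for feat, ts in thresholds(tree).items():
        grid.setdefault(feat, {0.0, 1.0}).update(ts)

per_feature_cuts = {f: sorted(ts) for f, ts in grid.items()}
print(per_feature_cuts)
```

Because the cut lists contain every split threshold of every tree, the grid over-approximates nothing and misses nothing on this toy instance; the referee's point is that the manuscript should prove this in general.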
Minor comments (2)
- [§2] Notation for the sensitivity measure (e.g., the precise definition of the quantitative count versus the indicator) should be introduced once and used consistently; several passages in §2 and §3 reuse the same symbol for both the Boolean and the numeric versions.
- [§5] Table 1 and Figure 3 would benefit from explicit column/axis labels that include the units of the reported runtimes and the exact grid resolution used in each experiment.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address the two major comments below and will revise the manuscript accordingly to strengthen the presentation.
Point-by-point responses
-
Referee: §3.2 (Discretization and ADD construction): the certified error and confidence bounds are stated to hold for the continuous sensitivity measure, yet the manuscript does not supply an explicit argument that every threshold-induced decision boundary in the ensemble is either aligned with or conservatively over-approximated by the chosen discretization grid. Without such a guarantee, a perturbation that crosses a split between two grid cells can be misclassified, rendering the reported bounds unsound for the original continuous problem.
Authors: We agree that an explicit soundness argument linking the discretization grid to the continuous decision boundaries is required. In the revised version we will insert a new lemma in §3.2 that formally proves the grid either aligns with every split threshold or produces a conservative over-approximation, thereby preserving the certified error and confidence bounds for the original continuous sensitivity measure. The proof will reference the finite set of thresholds extracted from the ensemble and show that any crossing perturbation is captured by the enclosing grid cell. revision: yes
-
Referee: §4 (Compositional splitting): the complexity analysis of the ADD splitting procedure is only sketched; a precise bound on the number and size of subproblems generated as a function of ensemble depth and number of trees is required to substantiate the scalability claims made in the abstract and §5.
Authors: We accept that the current sketch in §4 is insufficient. The revised manuscript will contain a precise complexity statement: the splitting procedure generates at most O(t · d) subproblems, where t is the number of trees and d the maximum depth, with each subproblem of size polynomial in the number of features and the discretization resolution. This bound will be derived from the recurrence relation that governs the compositional decomposition and will be accompanied by a short proof sketch. revision: yes
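To make the promised bound concrete, the arithmetic below contrasts the claimed O(t · d) subproblem count with the naive number of joint root-to-leaf path combinations across complete binary trees. The bound itself is the rebuttal's assumption, not a result derived here:

```python
# Illustrative arithmetic only: the rebuttal's claimed subproblem count
# versus the naive combinatorial blowup it avoids.

def claimed_subproblems(t, d):
    """Rebuttal's claimed bound: O(t * d) subproblems."""
    return t * d

def naive_path_combinations(t, d):
    """Joint root-to-leaf path choices: (2**d leaves per tree) ** t trees."""
    return (2 ** d) ** t

for t, d in [(5, 4), (10, 6), (50, 8)]:
    print(t, d, claimed_subproblems(t, d), naive_path_combinations(t, d))
```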
Circularity Check
No circularity in derivation; independent algorithmic encoding
full rationale
The paper defines a quantitative sensitivity measure for decision tree ensembles by discretizing the input space and encoding regions as an algebraic decision diagram (ADD), then solving the resulting subproblems compositionally. This construction is presented as a new algorithmic procedure whose output is compared directly against external model counters on the identical encoding. No step reduces the claimed sensitivity count or certified bounds to a fitted parameter, a self-referential definition, or a load-bearing self-citation; the central claim remains an independent computational method whose soundness is asserted relative to the discretization and ADD representation rather than derived from prior results by the same authors.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: decision tree ensembles can be faithfully encoded as algebraic decision diagrams for sensitivity analysis.
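This axiom can be illustrated with a minimal hash-consed ADD: Boolean grid predicates at the internal nodes, numeric vote sums at the terminals. The sketch mirrors only the general ADD idea (shared subgraphs, multi-valued terminals), not the paper's actual construction:

```python
# Hedged sketch: the vote sum of two stumps as a reduced ADD over the grid
# predicates b0 = [x > 0.4] and b1 = [x > 0.7], so f(b0, b1) = b0 + b1.

def mk(var, lo, hi, table):
    """Reduced ADD node: merge equal children, share identical subgraphs."""
    if lo == hi:
        return lo
    return table.setdefault((var, lo, hi), (var, lo, hi))

def eval_add(node, assignment):
    while isinstance(node, tuple):
        var, lo, hi = node
        node = hi if assignment[var] else lo
    return node

table = {}
b1_given_b0_false = mk("b1", 0, 1, table)
b1_given_b0_true = mk("b1", 1, 2, table)
root = mk("b0", b1_given_b0_false, b1_given_b0_true, table)

# Note: b1 implies b0 on the real line (x > 0.7 forces x > 0.4); a real
# encoding would prune the infeasible assignment b0 = 0, b1 = 1.
print("ADD nodes:", len(table),
      "vote sum at b0=1, b1=0:", eval_add(root, {"b0": 1, "b1": 0}))
```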