Recognition: 2 theorem links
Quantitative Linear Logic for Neuro-Symbolic Learning and Verification
Pith reviewed 2026-05-15 05:36 UTC · model grok-4.3
The pith
Quantitative Linear Logic uses sum and log-sum-exp on logits to embed constraints in neural training while satisfying most linear logic laws.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Quantitative Linear Logic (QLL) interprets logical connectives as sum and log-sum-exp operations on additive quantities such as logits; under this semantics most of the standard laws of linear logic remain valid, and the resulting training objectives produce models whose test-time performance under adversarial attack correlates with the verdicts of off-the-shelf neural-network verifiers.
What carries the argument
The semantics of Quantitative Linear Logic, which map each connective to sum or log-sum-exp applied to logits so that the loss terms remain compatible with standard gradient-based training while preserving most linear-logic identities.
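A minimal sketch of what such a semantics could look like, assuming (as the referee summary below puts it) that additive connectives are interpreted by summation of logits and multiplicative connectives by log-sum-exp. The function names and the exact assignment of connectives to operations are illustrative, not the paper's definitions:

```python
import math

def lse(a, b):
    """Numerically stable log-sum-exp: log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Hypothetical logit-valued interpretation of two connectives
# (which linear-logic connective gets which operation is assumed here).
def additive(a, b):        # additive connective as summation of logits
    return a + b

def multiplicative(a, b):  # multiplicative connective as log-sum-exp
    return lse(a, b)

# Both operations are differentiable, associative, and commutative, which is
# why loss terms built from them stay gradient-friendly while identities
# such as tensor associativity/commutativity can survive.
x, y, z = 0.5, -1.2, 2.0
assert abs(multiplicative(multiplicative(x, y), z)
           - multiplicative(x, multiplicative(y, z))) < 1e-9
assert abs(multiplicative(x, y) - multiplicative(y, x)) < 1e-9
assert additive(x, y) == additive(y, x)
```

Because both maps act on additive quantities already produced by the network, a constraint compiled this way slots directly into the loss without a separate relaxation step.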
If this is right
- Neural networks trained with QLL-encoded constraints exhibit test-time robustness that aligns with formal verification outcomes.
- Adversarial testing can serve as a practical proxy for verification when full formal checks are expensive.
- QLL supplies a single differentiable logic that balances logical adequacy with analytic suitability for gradient descent.
- The same loss terms can be used both to enforce logical properties during training and to predict verifiability at deployment.
Where Pith is reading between the lines
- The same naturality principle could be applied to define quantitative versions of other substructural logics for neuro-symbolic tasks.
- Integration with existing neural verifiers becomes more direct because the training objective already mirrors the operations those verifiers analyze.
- Scaling QLL to larger models may reduce the gap between what can be trained and what can be formally certified.
- The correlation result suggests a route to hybrid training-verification pipelines where partial verification feedback refines the loss.
Load-bearing premise
The particular choice of sum and log-sum-exp operations on logits will simultaneously preserve most linear logic laws and produce measurable correlation between adversarial robustness and formal verification results.
What would settle it
Train a network under QLL constraints on a benchmark dataset, then run both adversarial attacks and an off-the-shelf verifier on the same models; if the two measures show no consistent correlation across several architectures and constraint sets, the empirical claim does not hold.
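The post-processing step of that experiment can be sketched as follows: given one adversarial-robustness score and one verified-constraint fraction per trained model, compute their correlation. All numbers and variable names here are made up for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One entry per trained model (hypothetical values):
robust_acc    = [0.62, 0.48, 0.71, 0.55]  # accuracy under adversarial attack
verified_frac = [0.58, 0.40, 0.75, 0.50]  # constraints certified by a verifier

r = pearson(robust_acc, verified_frac)
print(f"correlation r = {r:.3f}")  # a consistently strong positive r,
# across architectures and constraint sets, would support the claim
```

A real study would repeat this across architectures and constraint sets and report the distribution of r, not a single value.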
Original abstract
Differentiable Logics are deployed in neuro-symbolic learning tasks as a way of embedding logical constraints in the training objective of neural networks. A differentiable logic consists of a syntax to write logical properties and a semantics to interpret them as real-valued functions to be folded in the loss function. A defining trade-off of the field is that between logical properties of the connectives, and analytic concerns for the semantics, with both aspects being relevant in applications. At one extreme we find fuzzy logics, that have well-established algebraic and proof-theoretic foundations, and at the other ad-hoc differentiable logics like Fischer's DL2, conceived for deep learning applications. However, no satisfactory foundation has emerged yet. We propose a resolution to this long-standing tension via a novel logic, Quantitative Linear Logic (QLL), with foundational ambitions. Our design is driven by naturality -- the idea that, since logical constraints are translated to losses, the semantics of the connectives should be pertinent operations used in ML practice (that is, sum and log-sum-exp) on additive quantities (like logits). We then judge the result on two aspects: logical adequacy -- that they satisfy most of the standard logical laws of Linear Logic; and empirical effectiveness -- test-time performance (as measured by adversarial attacks) is well-correlated to the actual verification of the logical constraints (as measured by off-the-shelf neural network verifiers), which makes QLL stand out among SoTA techniques.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces Quantitative Linear Logic (QLL) as a differentiable logic for neuro-symbolic learning. It interprets logical connectives via sum (additive) and log-sum-exp (multiplicative) operations on logits, motivated by naturality with machine-learning practice. The central claims are logical adequacy (QLL satisfies most standard laws of linear logic) and empirical effectiveness (test-time adversarial robustness correlates with formal verification results obtained from off-the-shelf neural-network verifiers, outperforming state-of-the-art techniques).
Significance. If the chosen semantics preserve a sufficiently rich fragment of linear logic while remaining differentiable, QLL could supply a principled foundation that resolves the long-standing tension between algebraic/proof-theoretic properties and gradient-based training. This would be significant for neuro-symbolic systems that require both trainable constraints and verifiable guarantees.
major comments (2)
- [Abstract] The assertion that the sum and log-sum-exp semantics 'satisfy most of the standard logical laws of Linear Logic' is load-bearing for the foundational claim, yet the abstract supplies neither the explicit semantic equations for the connectives nor an enumeration of which laws (e.g., tensor associativity/commutativity, linear-implication adjunction, or exponential promotion) survive. Without these, it is impossible to assess whether the surviving fragment is adequate for constraint embedding and verification.
- [Abstract] The empirical claim that 'test-time performance (as measured by adversarial attacks) is well-correlated to the actual verification of the logical constraints' is presented as distinguishing QLL from SoTA, but no quantitative metrics, datasets, or verification protocols are referenced. This correlation is central to the practical contribution and must be supported by explicit results.
minor comments (1)
- [Abstract] The abstract would benefit from a concise statement of the precise operations chosen for the additive and multiplicative fragments and a pointer to the section containing the semantic definitions.
Simulated Author's Rebuttal
We thank the referee for the constructive comments on the abstract. We address each point below and will revise the manuscript to improve clarity on both the logical fragment and the empirical evaluation.
Point-by-point responses
- Referee: [Abstract] The assertion that the sum and log-sum-exp semantics 'satisfy most of the standard logical laws of Linear Logic' is load-bearing for the foundational claim, yet the abstract supplies neither the explicit semantic equations for the connectives nor an enumeration of which laws (e.g., tensor associativity/commutativity, linear-implication adjunction, or exponential promotion) survive. Without these, it is impossible to assess whether the surviving fragment is adequate for constraint embedding and verification.
  Authors: We agree that the abstract should be more precise. In the revision we will add the semantic equations (additive connectives interpreted via summation on logits; multiplicative connectives via log-sum-exp) and enumerate the preserved laws, specifically tensor associativity, commutativity, and the linear-implication adjunction, while noting that full exponential promotion does not hold. This will make the adequacy of the fragment for constraint embedding immediately verifiable from the abstract. Revision: yes.
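If the multiplicative conjunction really is log-sum-exp on logits, the linear-implication adjunction can be realized by its residual. The residual formula below is the standard quantale construction, assumed for illustration rather than taken from the paper:

```python
import math

def lse(a, b):
    """Assumed multiplicative-conjunction semantics: log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def limp(b, c):
    """Residual of lse: the largest a with lse(a, b) <= c (requires c > b)."""
    return c + math.log(1.0 - math.exp(b - c))

# Adjunction: lse(a, b) <= c  iff  a <= limp(b, c).
a, b, c = -0.3, 0.1, 1.5
assert (lse(a, b) <= c) == (a <= limp(b, c))
# The residual is tight: combining it back with b recovers c exactly.
assert abs(lse(limp(b, c), b) - c) < 1e-9
```

This is the kind of check one would expect the revised manuscript to state symbolically for each preserved law.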
- Referee: [Abstract] The empirical claim that 'test-time performance (as measured by adversarial attacks) is well-correlated to the actual verification of the logical constraints' is presented as distinguishing QLL from SoTA, but no quantitative metrics, datasets, or verification protocols are referenced. This correlation is central to the practical contribution and must be supported by explicit results.
  Authors: The body of the manuscript already contains the full experimental details, including datasets, adversarial attack protocols, off-the-shelf verifier usage, and quantitative correlation metrics showing QLL outperforming prior approaches. We will revise the abstract to reference these key results concisely (e.g., correlation strength and verification protocol) so the empirical distinction is evident at the abstract level. Revision: yes.
Circularity Check
No significant circularity: derivation rests on external logical laws and off-the-shelf verifiers
Full rationale
The paper selects sum and log-sum-exp as semantics for QLL connectives because they are standard ML operations on logits, then separately checks that 'most' linear logic laws hold and that test-time adversarial robustness correlates with results from external neural-network verifiers. No equations or fitted parameters are presented that reduce a claimed prediction back to the input choice by construction; the logical-adequacy claim is evaluated against standard linear-logic axioms rather than self-defined ones, and the empirical claim uses independent verification tools. The central design step is therefore an ansatz justified by domain practice, not a self-referential loop.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: logical constraints translate to losses via sum and log-sum-exp on logits.
invented entities (1)
- Quantitative Linear Logic (QLL): no independent evidence.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean: washburn_uniqueness_aczel (tag: unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear. Quoted passage:
Our design is driven by naturality—the idea that... semantics of the connectives should be pertinent operations used in ML practice (that is, sum and log-sum-exp) on additive quantities (like logits).
- IndisputableMonolith/Foundation/AlphaCoordinateFixation.lean: costAlphaLog_high_calibrated_iff (tag: unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear. Quoted passage:
We think of these as a 'quantitative' version of a residuated lattice (a softale)
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 2 Pith papers
- Quantitative Linear Logic: pQLL calculi make proof validity and sequent provability real-valued quantities, generalizing hypersequent calculi and deep inference while proving cut-elimination and completeness for soft residuated lattices.
- Quantitative Linear Logic: pQLL calculi assign real-valued strength to proofs, generalize hypersequent and deep inference systems, prove cut elimination, and achieve completeness for soft residuated lattices, recovering MALL as p goes to infinity.